Oct 13 06:28:28 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 13 06:28:28 crc restorecon[4737]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:28 crc restorecon[4737]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 06:28:29 crc restorecon[4737]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 13 06:28:30 crc kubenswrapper[4833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 06:28:30 crc kubenswrapper[4833]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 13 06:28:30 crc kubenswrapper[4833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 06:28:30 crc kubenswrapper[4833]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 06:28:30 crc kubenswrapper[4833]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 13 06:28:30 crc kubenswrapper[4833]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.363454 4833 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367616 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367645 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367653 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367658 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367664 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367674 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367682 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367692 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367698 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367705 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367716 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367721 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367726 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367732 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367737 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367741 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367746 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367751 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367756 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367761 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367765 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367769 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367773 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367780 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367784 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367788 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367792 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367797 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367802 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367807 4833 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367814 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367820 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367825 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367832 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367836 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367844 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367848 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367852 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367856 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367860 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367864 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367871 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367875 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367879 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367886 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367891 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367896 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367976 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367982 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367987 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367992 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.367996 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368002 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368009 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368014 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368018 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368024 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368029 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368034 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368038 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368043 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368094 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368100 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368105 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368111 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368117 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368419 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368436 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368447 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368454 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.368460 4833 feature_gate.go:330] unrecognized feature gate: Example
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368609 4833 flags.go:64] FLAG: --address="0.0.0.0"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368627 4833 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368643 4833 flags.go:64] FLAG: --anonymous-auth="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368652 4833 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368660 4833 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368666 4833 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368674 4833 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368682 4833 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368687 4833 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368693 4833 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368698 4833 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368704 4833 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368709 4833 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368715 4833 flags.go:64] FLAG: --cgroup-root=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368720 4833 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368726 4833 flags.go:64] FLAG: --client-ca-file=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368731 4833 flags.go:64] FLAG: --cloud-config=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368739 4833 flags.go:64] FLAG: --cloud-provider=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368744 4833 flags.go:64] FLAG: --cluster-dns="[]"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368754 4833 flags.go:64] FLAG: --cluster-domain=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368759 4833 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368765 4833 flags.go:64] FLAG: --config-dir=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368770 4833 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368777 4833 flags.go:64] FLAG: --container-log-max-files="5"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368785 4833 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368791 4833 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368801 4833 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368808 4833 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368815 4833 flags.go:64] FLAG: --contention-profiling="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368821 4833 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368827 4833 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368832 4833 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368838 4833 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368846 4833 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368852 4833 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368858 4833 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368863 4833 flags.go:64] FLAG: --enable-load-reader="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368869 4833 flags.go:64] FLAG: --enable-server="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368875 4833 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368884 4833 flags.go:64] FLAG: --event-burst="100"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368890 4833 flags.go:64] FLAG: --event-qps="50"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368896 4833 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368901 4833 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368907 4833 flags.go:64] FLAG: --eviction-hard=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368916 4833 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368921 4833 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368927 4833 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368932 4833 flags.go:64] FLAG: --eviction-soft=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368938 4833 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368943 4833 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368950 4833 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368955 4833 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368961 4833 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368966 4833 flags.go:64] FLAG: --fail-swap-on="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368971 4833 flags.go:64] FLAG: --feature-gates=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368978 4833 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368983 4833 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368987 4833 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368993 4833 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.368998 4833 flags.go:64] FLAG: --healthz-port="10248"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369003 4833 flags.go:64] FLAG: --help="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369007 4833 flags.go:64] FLAG: --hostname-override=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369011 4833 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369016 4833 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369020 4833 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369024 4833 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369029 4833 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369033 4833 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369037 4833 flags.go:64] FLAG: --image-service-endpoint=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369041 4833 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369046 4833 flags.go:64] FLAG: --kube-api-burst="100"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369050 4833 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369054 4833 flags.go:64] FLAG: --kube-api-qps="50"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369058 4833 flags.go:64] FLAG: --kube-reserved=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369063 4833 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369067 4833 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369072 4833 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369076 4833 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369080 4833 flags.go:64] FLAG: --lock-file=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369085 4833 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369089 4833 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369093 4833 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369102 4833 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369109 4833 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369113 4833 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369119 4833 flags.go:64] FLAG: --logging-format="text"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369157 4833 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369163 4833 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369167 4833 flags.go:64] FLAG: --manifest-url=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369173 4833 flags.go:64] FLAG: --manifest-url-header=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369179 4833 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369184 4833 flags.go:64] FLAG: --max-open-files="1000000"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369190 4833 flags.go:64] FLAG: --max-pods="110"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369195 4833 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369200 4833 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369204 4833 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369210 4833 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369215 4833 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369219 4833 flags.go:64] FLAG: --node-ip="192.168.126.11"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369226 4833 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369253 4833 flags.go:64] FLAG: --node-status-max-images="50"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369259 4833 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369263 4833 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369268 4833 flags.go:64] FLAG: --pod-cidr=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369272 4833 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369282 4833 flags.go:64] FLAG: --pod-manifest-path=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369286 4833 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369291 4833 flags.go:64] FLAG: --pods-per-core="0"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369295 4833 flags.go:64] FLAG: --port="10250"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369299 4833 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369304 4833 flags.go:64] FLAG: --provider-id=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369308 4833 flags.go:64] FLAG: --qos-reserved=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369313 4833 flags.go:64] FLAG: --read-only-port="10255"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369317 4833 flags.go:64] FLAG: --register-node="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369322 4833 flags.go:64] FLAG: --register-schedulable="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369326 4833 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369335 4833 flags.go:64] FLAG: --registry-burst="10"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369340 4833 flags.go:64] FLAG: --registry-qps="5"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369345 4833 flags.go:64] FLAG: --reserved-cpus=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369351 4833 flags.go:64] FLAG: --reserved-memory=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369357 4833 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369361 4833 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369365 4833 flags.go:64] FLAG: --rotate-certificates="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369370 4833 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369374 4833 flags.go:64] FLAG: --runonce="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369379 4833 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369383 4833 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369387 4833 flags.go:64] FLAG: --seccomp-default="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369392 4833 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369396 4833 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369400 4833 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369405 4833 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369409 4833 flags.go:64] FLAG: --storage-driver-password="root"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369413 4833 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369417 4833 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369422 4833 flags.go:64] FLAG: --storage-driver-user="root"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369426 4833 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369430 4833 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369435 4833 flags.go:64] FLAG: --system-cgroups=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369439 4833 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369447 4833 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369451 4833 flags.go:64] FLAG: --tls-cert-file=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369455 4833 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369462 4833 flags.go:64] FLAG: --tls-min-version=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369466 4833 flags.go:64] FLAG: --tls-private-key-file=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369470 4833 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369475 4833 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369480 4833 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369485 4833 flags.go:64] FLAG: --v="2"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369494 4833 flags.go:64] FLAG: --version="false"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369503 4833 flags.go:64] FLAG: --vmodule=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369509 4833 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.369514 4833 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369669 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369675 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369681 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369685 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369689 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369693 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369697 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369701 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369705 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369708 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369712 4833 feature_gate.go:330] unrecognized feature gate: Example
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369716 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369719 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369723 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369726 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369730 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369734 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369737 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369741 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369744 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369749 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369752 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369756 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369759 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369763 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369766 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369771 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369774 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369778 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369782 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369785 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369790 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369794 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369797 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369801 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369804 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369808 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369812 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369816 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369820 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369824 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369827 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369832 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
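The flags.go:64 entries above are the kubelet's flag-by-flag record of its effective command line. A minimal sketch (hypothetical helper, Python stdlib only, assuming one journal entry per line as above) for folding such a dump into a dict, e.g. to diff the flag sets of two boots:

    import re

    # Collect 'flags.go:64] FLAG: --name="value"' journal entries into a dict.
    FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w-]+)="?(.*?)"?$')

    def parse_flag_dump(journal_lines):
        flags = {}
        for line in journal_lines:
            m = FLAG_RE.search(line)
            if m:
                flags[m.group(1)] = m.group(2)
        return flags

    # e.g. parse_flag_dump(open("kubelet.log"))["--node-ip"] -> "192.168.126.11"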
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369837 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369842 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369846 4833 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369849 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369854 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369858 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369861 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369866 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369871 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369878 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369883 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369887 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369891 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369895 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369899 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369913 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369918 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369922 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369926 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369930 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369934 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369938 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369941 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369945 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369948 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369952 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369957 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.369961 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.371748 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.384209 4833 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.384244 4833 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384338 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384349 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
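The feature_gate.go:386 entry above is the resolved gate map; the same map is printed again twice further down in this boot. A small sketch (assuming the line format shown above) to parse one of those lines into a dict, e.g. to verify the three dumps agree:

    import re

    # Parse one 'feature gates: {map[Name:bool ...]}' journal line.
    def parse_feature_gates(line):
        m = re.search(r'feature gates: \{map\[(.+?)\]\}', line)
        if not m:
            return {}
        return {name: val == "true"
                for name, val in (item.rsplit(":", 1)
                                  for item in m.group(1).split())}

    # e.g. parse_feature_gates(line)["KMSv1"] -> True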
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384358 4833 feature_gate.go:330] unrecognized feature gate: Example
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384364 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384370 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384376 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384381 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384386 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384391 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384396 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384401 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384406 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384411 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384416 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384421 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384426 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384432 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384438 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384444 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384450 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384455 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384461 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384467 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384473 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384478 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384484 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384490 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384494 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384499 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384507 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384512 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384517 4833 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384523 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384527 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384533 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384552 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384558 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384562 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384569 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384575 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384584 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384589 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384595 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384600 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384605 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384610 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384615 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384620 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384624 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384631 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384636 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384643 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384648 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384656 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384662 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384668 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384674 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384679 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384683 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384688 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384693 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384698 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384703 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384707 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384712 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384717 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384722 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384728 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384734 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384739 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384745 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.384756 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384915 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384924 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384930 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384935 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384941 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384946 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384954 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384959 4833 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384964 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384970 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384975 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384980 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384986 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384992 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.384998 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385003 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385008 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385013 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385019 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385025 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385031 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385036 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385041 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385046 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385051 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385057 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385062 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385067 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385072 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385077 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385082 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385087 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385092 4833 feature_gate.go:330] unrecognized feature gate: Example
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385097 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385102 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385107 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385112 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385148 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385153 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385159 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385163 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385168 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385173 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385178 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385182 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385187 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385194 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385200 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385205 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385210 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385215 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385220 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385224 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385230 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385236 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385242 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385248 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385254 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385259 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385264 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385268 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385273 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385279 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385286 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385291 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385298 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385303 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385309 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385314 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385318 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.385323 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.385331 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.386529 4833 server.go:940] "Client rotation is on, will bootstrap in background"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.392192 4833 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.392293 4833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
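certificate_store.go:130 names the on-disk client certificate the kubelet just loaded. One way to cross-check that pair from the node itself (a sketch using the standard openssl CLI; reading the pem typically requires root):

    import subprocess

    # Inspect the kubelet client certificate loaded above.
    CERT = "/var/lib/kubelet/pki/kubelet-client-current.pem"
    out = subprocess.run(
        ["openssl", "x509", "-noout", "-subject", "-enddate", "-in", CERT],
        capture_output=True, text=True, check=True,
    )
    # Expect notAfter to match the certificate expiration logged just below.
    print(out.stdout)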
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.394014 4833 server.go:997] "Starting client certificate rotation" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.394051 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.395044 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-09 00:10:58.246106522 +0000 UTC Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.395110 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 641h42m27.851000053s for next certificate rotation Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.420105 4833 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.427764 4833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.444440 4833 log.go:25] "Validated CRI v1 runtime API" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.484062 4833 log.go:25] "Validated CRI v1 image API" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.486487 4833 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.493912 4833 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-13-06-23-50-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.493965 4833 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.520480 4833 manager.go:217] Machine: {Timestamp:2025-10-13 06:28:30.51864582 +0000 UTC m=+0.619068756 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2a40fffb-7b97-4765-9d1a-75d6749bf8d3 BootID:34e4ed34-c49c-4b1b-8fbf-570796f37a92 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 
DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:02:0a:1a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:02:0a:1a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:86:bc:b1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9c:96:e5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8a:41:5b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:1e:70:6c Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:2b:85:d2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:e4:e1:42:ee:bb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ea:05:ec:fe:de:8c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.520761 4833 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.520945 4833 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.522299 4833 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.522496 4833 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.522559 4833 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.522864 4833 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.522880 4833 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.523400 4833 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.523435 4833 server.go:66] "Creating device plugin registration server" 
version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.524293 4833 state_mem.go:36] "Initialized new in-memory state store" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.524396 4833 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.530388 4833 kubelet.go:418] "Attempting to sync node with API server" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.530440 4833 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.530483 4833 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.530499 4833 kubelet.go:324] "Adding apiserver pod source" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.530514 4833 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.537671 4833 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.539095 4833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.539368 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.539377 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.539528 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.539570 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.542579 4833 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544198 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544231 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544239 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544247 4833 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/host-path" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544263 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544272 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544283 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544298 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544309 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544318 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544330 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544337 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.544923 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.545587 4833 server.go:1280] "Started kubelet" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.545752 4833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.545906 4833 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.547028 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.547501 4833 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 06:28:30 crc systemd[1]: Started Kubernetes Kubelet. 
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.549810 4833 server.go:460] "Adding debug handlers to kubelet server" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.550573 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.550635 4833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.550901 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:59:29.244804144 +0000 UTC Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.550982 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1671h30m58.693826695s for next certificate rotation Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.551009 4833 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.555355 4833 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.555509 4833 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.556275 4833 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.557921 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.558127 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.559158 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.562581 4833 factory.go:55] Registering systemd factory Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.562622 4833 factory.go:221] Registration of the systemd container factory successfully Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.563104 4833 factory.go:153] Registering CRI-O factory Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.563159 4833 factory.go:221] Registration of the crio container factory successfully Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.563282 4833 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.563405 4833 factory.go:103] Registering Raw factory Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.563483 4833 manager.go:1196] Started watching for new ooms in manager Oct 13 06:28:30 crc 
kubenswrapper[4833]: I1013 06:28:30.564632 4833 manager.go:319] Starting recovery of all containers Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.564078 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186df917ee12b29f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-13 06:28:30.545498783 +0000 UTC m=+0.645921699,LastTimestamp:2025-10-13 06:28:30.545498783 +0000 UTC m=+0.645921699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.575972 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576043 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576056 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576066 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576077 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576088 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576117 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576129 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 13 06:28:30 
crc kubenswrapper[4833]: I1013 06:28:30.576144 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576154 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576165 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576176 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576207 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576219 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576231 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576242 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576275 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576292 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576303 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576313 4833 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576424 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576457 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576470 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576481 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576494 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576520 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576550 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576564 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576575 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576606 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576617 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576628 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576688 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576699 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576710 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576722 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576732 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576743 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576777 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576790 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576800 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576811 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576823 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576837 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576848 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576882 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576899 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576912 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576929 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576944 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576955 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576967 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.576985 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577015 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577053 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577100 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577112 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577124 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577136 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577147 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577158 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577171 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577182 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577193 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577206 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577238 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577249 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577260 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577273 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577284 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577295 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577308 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577320 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577332 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577344 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577354 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577365 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577384 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577397 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577408 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577422 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577436 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577446 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577457 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577469 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577481 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577494 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577506 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577517 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577527 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577583 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577595 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577628 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577639 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577650 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577661 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577675 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577687 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577698 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577709 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577722 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577733 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577745 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577756 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577910 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577926 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577938 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.577993 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578011 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578028 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578040 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578054 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578067 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578081 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578094 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578104 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578117 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.578128 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580711 4833 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580736 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580751 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580761 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580772 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580787 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580799 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580809 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580821 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580832 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580846 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580857 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580868 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580879 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580891 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580902 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580913 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580924 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580935 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580946 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580958 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580970 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.580988 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581001 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581017 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581059 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581072 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581085 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581097 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581109 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581141 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581152 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581164 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581176 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581187 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581199 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581211 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581222 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581233 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581244 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581255 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581266 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581279 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581292 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581302 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581323 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581335 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581345 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581358 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581370 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581381 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581390 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581403 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581413 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581424 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581435 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581447 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581458 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581497 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581510 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581521 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581546 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581559 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581570 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581581 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581593 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581622 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581635 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581647 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581659 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581670 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581683 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581694 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581703 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581716 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581727 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581738 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581753 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581765 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581776 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581789 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581800 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581812 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581824 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581836 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581849 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581860 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581872 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581883 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581895 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581907 4833 reconstruct.go:97] "Volume reconstruction finished"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.581914 4833 reconciler.go:26] "Reconciler: start to sync state"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.593963 4833 manager.go:324] Recovery completed
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.606568 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.614596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.614642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.614652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.615711 4833 cpu_manager.go:225] "Starting CPU manager" policy="none"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.615725 4833 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.615746 4833 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.622912 4833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.625726 4833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.625767 4833 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.625797 4833 kubelet.go:2335] "Starting kubelet main sync loop"
Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.625958 4833 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 06:28:30 crc kubenswrapper[4833]: W1013 06:28:30.628407 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.628473 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.632053 4833 policy_none.go:49] "None policy: Start"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.632959 4833 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.632995 4833 state_mem.go:35] "Initializing new in-memory state store"
Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.656226 4833 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.688429 4833 manager.go:334] "Starting Device Plugin manager"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.688754 4833 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.688774 4833 server.go:79] "Starting device plugin registration server"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.689753 4833 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.689771 4833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.689991 4833 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.690174 4833 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.690187 4833 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.702518 4833 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.726747 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.726837 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.728217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.728262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.728279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.728454 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.728703 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.728749 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.730399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.730425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.730433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.730573 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.730839 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.730915 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.731300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.731324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.731331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.731584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.731628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.731640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.731808 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.731977 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.732031 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.732819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.732844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.732854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.733092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.733118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.733128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.733250 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.733377 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.733436 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.734499 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.734521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.734529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.735163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.735201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.735211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.735227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.735241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.735249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.735381 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.735409 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.736304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.736332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.736343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.760999 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784386 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784414 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784450 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784570 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784627 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784661 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784690 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784824 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.784974 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.785001 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.785027 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.785046 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.785076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.785089 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.790463 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.791309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.791340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.791350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.791378 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.791676 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886690 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886760 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886796 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886829 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886862 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886892 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886921 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886933 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887007 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886947 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887113 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886940 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.886966 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887152 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887226 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887278 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887319 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887342 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887362 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887396 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887458 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887475 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887508 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887591 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.887460 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.991935 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.993883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.993933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.993950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:30 crc kubenswrapper[4833]: I1013 06:28:30.993984 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 13 06:28:30 crc kubenswrapper[4833]: E1013 06:28:30.994496 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.065575 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.073186 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.089811 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.095840 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.101095 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 13 06:28:31 crc kubenswrapper[4833]: W1013 06:28:31.114696 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f771847b6879667abc727595a1320f70a1eda3ddd1a03c7a9978a5e8d7bb36db WatchSource:0}: Error finding container f771847b6879667abc727595a1320f70a1eda3ddd1a03c7a9978a5e8d7bb36db: Status 404 returned error can't find the container with id f771847b6879667abc727595a1320f70a1eda3ddd1a03c7a9978a5e8d7bb36db
Oct 13 06:28:31 crc kubenswrapper[4833]: W1013 06:28:31.117818 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-670cc295c1c13e761f092cd43eb96d88a9ea559062b236f1a5de1a4f374c71aa WatchSource:0}: Error finding container 670cc295c1c13e761f092cd43eb96d88a9ea559062b236f1a5de1a4f374c71aa: Status 404 returned error can't find the container with id 670cc295c1c13e761f092cd43eb96d88a9ea559062b236f1a5de1a4f374c71aa
Oct 13 06:28:31 crc kubenswrapper[4833]: W1013 06:28:31.119134 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-559b8a1b24fa5a7b4b3e34ccd102703a27bc75bcd45b0d8071789652154df4fd WatchSource:0}: Error finding container 559b8a1b24fa5a7b4b3e34ccd102703a27bc75bcd45b0d8071789652154df4fd: Status 404 returned error can't find the container with id 559b8a1b24fa5a7b4b3e34ccd102703a27bc75bcd45b0d8071789652154df4fd
Oct 13 06:28:31 crc kubenswrapper[4833]: W1013 06:28:31.122982 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-04dd15de74536bf740be3d0181c4b3899c4d8655f3cee92d394c5cda8a90f8b7 WatchSource:0}: Error finding container 04dd15de74536bf740be3d0181c4b3899c4d8655f3cee92d394c5cda8a90f8b7: Status 404 returned error can't find the container with id 04dd15de74536bf740be3d0181c4b3899c4d8655f3cee92d394c5cda8a90f8b7
Oct 13 06:28:31 crc kubenswrapper[4833]: W1013 06:28:31.127661 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-71620b50cc60f350b85f463d3259e29ba886060774c0838a93606197625d341b WatchSource:0}: Error finding container 71620b50cc60f350b85f463d3259e29ba886060774c0838a93606197625d341b: Status 404 returned error can't find the container with id 71620b50cc60f350b85f463d3259e29ba886060774c0838a93606197625d341b
Oct 13 06:28:31 crc kubenswrapper[4833]: E1013 06:28:31.161864 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms"
Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.395509 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.397406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.397472 4833 kubelet_node_status.go:724] "Recording event message for
node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.397489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.397526 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 06:28:31 crc kubenswrapper[4833]: E1013 06:28:31.397995 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.548840 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:31 crc kubenswrapper[4833]: W1013 06:28:31.552609 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:31 crc kubenswrapper[4833]: E1013 06:28:31.552739 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.630894 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f771847b6879667abc727595a1320f70a1eda3ddd1a03c7a9978a5e8d7bb36db"} Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.632025 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"71620b50cc60f350b85f463d3259e29ba886060774c0838a93606197625d341b"} Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.633510 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"04dd15de74536bf740be3d0181c4b3899c4d8655f3cee92d394c5cda8a90f8b7"} Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.634277 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"559b8a1b24fa5a7b4b3e34ccd102703a27bc75bcd45b0d8071789652154df4fd"} Oct 13 06:28:31 crc kubenswrapper[4833]: I1013 06:28:31.635389 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"670cc295c1c13e761f092cd43eb96d88a9ea559062b236f1a5de1a4f374c71aa"} Oct 13 06:28:31 crc kubenswrapper[4833]: W1013 06:28:31.781888 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: 
connection refused Oct 13 06:28:31 crc kubenswrapper[4833]: E1013 06:28:31.781978 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 06:28:31 crc kubenswrapper[4833]: E1013 06:28:31.962996 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Oct 13 06:28:32 crc kubenswrapper[4833]: W1013 06:28:32.017077 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:32 crc kubenswrapper[4833]: E1013 06:28:32.017179 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 06:28:32 crc kubenswrapper[4833]: W1013 06:28:32.041836 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:32 crc kubenswrapper[4833]: E1013 06:28:32.041926 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.199128 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.200306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.200346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.200369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.200406 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 06:28:32 crc kubenswrapper[4833]: E1013 06:28:32.200986 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.548739 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.641714 4833 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d" exitCode=0 Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.641862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d"} Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.641868 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.643171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.643215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.643227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.643953 4833 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc" exitCode=0 Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.644003 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.644112 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc"} Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.644958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.644981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.644993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.651916 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f"} Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.651954 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.651970 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8"} Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 
06:28:32.651993 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3"} Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.652013 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33"} Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.653321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.653367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.653386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.655518 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9" exitCode=0 Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.655603 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9"} Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.655702 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.657646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.657679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.657690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.662156 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d03f03afc03f5539ea2ba7e2be7f65f36672f745ce9c4d84a0b49bcc4d3fe50b" exitCode=0 Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.662228 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d03f03afc03f5539ea2ba7e2be7f65f36672f745ce9c4d84a0b49bcc4d3fe50b"} Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.662412 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.662860 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.666466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.666509 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.666524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.666522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.666676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:32 crc kubenswrapper[4833]: I1013 06:28:32.666699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.548652 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Oct 13 06:28:33 crc kubenswrapper[4833]: E1013 06:28:33.564251 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.666781 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.666820 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.666831 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.666840 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.668329 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3349a11b48277e6165af857213a1caef239d66a53aaba133680bd7961f0b7c63" exitCode=0 Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.668360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3349a11b48277e6165af857213a1caef239d66a53aaba133680bd7961f0b7c63"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.668481 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.669288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:33 crc 
kubenswrapper[4833]: I1013 06:28:33.669312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.669321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.670096 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.670119 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"158c71a74819eac0b6778680208bfd0f402fe582343198c0af41a68e823495af"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.670681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.670695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.670702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.672980 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.673027 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.673019 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.673133 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.673153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141"} Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.673924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.673961 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.673976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.674047 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.674074 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 
06:28:33.674090 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.802508 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.803881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.803911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.803919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:33 crc kubenswrapper[4833]: I1013 06:28:33.803944 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 06:28:33 crc kubenswrapper[4833]: E1013 06:28:33.804316 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.360649 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.676755 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="238bd0083bc581a5c677e018590e6dd9efc299c9ed611bc7327d9286423e34d9" exitCode=0 Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.676834 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"238bd0083bc581a5c677e018590e6dd9efc299c9ed611bc7327d9286423e34d9"} Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.676847 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.677689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.677737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.677758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.680750 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.688213 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4880365bfcff289807821704c623d9eebc0d005713cfd7f1aaf0a438bea3654" exitCode=255 Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.688321 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.688398 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.688422 4833 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.688758 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.688932 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b4880365bfcff289807821704c623d9eebc0d005713cfd7f1aaf0a438bea3654"} Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.688320 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.689388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.689407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.689416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.689922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.689945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.689956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.690271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.690288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.690295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.690421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.690439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.690448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:34 crc kubenswrapper[4833]: I1013 06:28:34.690711 4833 scope.go:117] "RemoveContainer" containerID="b4880365bfcff289807821704c623d9eebc0d005713cfd7f1aaf0a438bea3654" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.201425 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.693734 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.695862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7"} Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.696028 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.701186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.701223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.701236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.705838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d47530dd176be1fef751916ea99b7fce371bde54e4478ef1db44aa1891ef8d89"} Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.705893 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9cc6a036c2aba21bbecedd3cc9db91682f7b5e56d6ebf2ef6dbd48c279fde0b0"} Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.705913 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.705913 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e5f0ae5e8d940ae60afb26a1cb306735f2c9b4b7d238c9baed6d21075b18de4a"} Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.706102 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d1d898c97544e00cc44ac177fb9a7b59b70fc095e270163b817f0f145aa1bda6"} Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.706170 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"99ef85ae489d10641c1474040fd113ee16f52469384e899ff95068625e6274fe"} Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.706635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.706656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.706664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.891074 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.891359 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.893142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:35 crc 
kubenswrapper[4833]: I1013 06:28:35.893200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.893223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:35 crc kubenswrapper[4833]: I1013 06:28:35.898646 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.708528 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.708606 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.708574 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.708566 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.709889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.709903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.709927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.709926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.709969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.709993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.709946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.709928 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.710293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:36 crc kubenswrapper[4833]: I1013 06:28:36.956334 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.004696 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.006631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.006690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.006704 4833 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.006729 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.711850 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.711876 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.713570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.713600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.713629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.713720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.713635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:37 crc kubenswrapper[4833]: I1013 06:28:37.713788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:38 crc kubenswrapper[4833]: I1013 06:28:38.536701 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 06:28:38 crc kubenswrapper[4833]: I1013 06:28:38.715147 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:38 crc kubenswrapper[4833]: I1013 06:28:38.716302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:38 crc kubenswrapper[4833]: I1013 06:28:38.716355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:38 crc kubenswrapper[4833]: I1013 06:28:38.716392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:38 crc kubenswrapper[4833]: I1013 06:28:38.835759 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.207734 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.208047 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.209691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.209770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.209791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.682780 4833 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.683037 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.684435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.684471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.684482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.717832 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.719315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.719357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:39 crc kubenswrapper[4833]: I1013 06:28:39.719370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:40 crc kubenswrapper[4833]: E1013 06:28:40.702780 4833 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 13 06:28:41 crc kubenswrapper[4833]: I1013 06:28:41.911477 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:41 crc kubenswrapper[4833]: I1013 06:28:41.911773 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:41 crc kubenswrapper[4833]: I1013 06:28:41.913692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:41 crc kubenswrapper[4833]: I1013 06:28:41.913754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:41 crc kubenswrapper[4833]: I1013 06:28:41.913767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:41 crc kubenswrapper[4833]: I1013 06:28:41.918123 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:42 crc kubenswrapper[4833]: I1013 06:28:42.075696 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:42 crc kubenswrapper[4833]: I1013 06:28:42.728142 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:42 crc kubenswrapper[4833]: I1013 06:28:42.729369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:42 crc kubenswrapper[4833]: I1013 06:28:42.729447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:42 crc kubenswrapper[4833]: I1013 06:28:42.729465 4833 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:43 crc kubenswrapper[4833]: I1013 06:28:43.730745 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:43 crc kubenswrapper[4833]: I1013 06:28:43.731745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:43 crc kubenswrapper[4833]: I1013 06:28:43.731770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:43 crc kubenswrapper[4833]: I1013 06:28:43.731777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:44 crc kubenswrapper[4833]: I1013 06:28:44.549400 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 13 06:28:44 crc kubenswrapper[4833]: W1013 06:28:44.688705 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 13 06:28:44 crc kubenswrapper[4833]: I1013 06:28:44.688837 4833 trace.go:236] Trace[1318616199]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Oct-2025 06:28:34.687) (total time: 10001ms): Oct 13 06:28:44 crc kubenswrapper[4833]: Trace[1318616199]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (06:28:44.688) Oct 13 06:28:44 crc kubenswrapper[4833]: Trace[1318616199]: [10.001050729s] [10.001050729s] END Oct 13 06:28:44 crc kubenswrapper[4833]: E1013 06:28:44.688872 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 13 06:28:44 crc kubenswrapper[4833]: W1013 06:28:44.738461 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 13 06:28:44 crc kubenswrapper[4833]: I1013 06:28:44.738631 4833 trace.go:236] Trace[584036618]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Oct-2025 06:28:34.736) (total time: 10002ms): Oct 13 06:28:44 crc kubenswrapper[4833]: Trace[584036618]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:28:44.738) Oct 13 06:28:44 crc kubenswrapper[4833]: Trace[584036618]: [10.00201532s] [10.00201532s] END Oct 13 06:28:44 crc kubenswrapper[4833]: E1013 06:28:44.738667 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 13 06:28:44 crc kubenswrapper[4833]: W1013 06:28:44.751138 
4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 13 06:28:44 crc kubenswrapper[4833]: I1013 06:28:44.751229 4833 trace.go:236] Trace[1983863170]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Oct-2025 06:28:34.749) (total time: 10001ms):
Oct 13 06:28:44 crc kubenswrapper[4833]: Trace[1983863170]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:28:44.751)
Oct 13 06:28:44 crc kubenswrapper[4833]: Trace[1983863170]: [10.001671893s] [10.001671893s] END
Oct 13 06:28:44 crc kubenswrapper[4833]: E1013 06:28:44.751254 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 13 06:28:44 crc kubenswrapper[4833]: W1013 06:28:44.808025 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 13 06:28:44 crc kubenswrapper[4833]: I1013 06:28:44.808113 4833 trace.go:236] Trace[1565089164]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Oct-2025 06:28:34.807) (total time: 10000ms):
Oct 13 06:28:44 crc kubenswrapper[4833]: Trace[1565089164]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (06:28:44.808)
Oct 13 06:28:44 crc kubenswrapper[4833]: Trace[1565089164]: [10.000927423s] [10.000927423s] END
Oct 13 06:28:44 crc kubenswrapper[4833]: E1013 06:28:44.808131 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 13 06:28:45 crc kubenswrapper[4833]: I1013 06:28:45.034336 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 13 06:28:45 crc kubenswrapper[4833]: I1013 06:28:45.034424 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 13 06:28:45 crc kubenswrapper[4833]: I1013 06:28:45.042904 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 13 06:28:45 crc kubenswrapper[4833]: I1013 06:28:45.042980 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 13 06:28:45 crc kubenswrapper[4833]: I1013 06:28:45.075821 4833 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 13 06:28:45 crc kubenswrapper[4833]: I1013 06:28:45.075891 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 13 06:28:45 crc kubenswrapper[4833]: I1013 06:28:45.202522 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Oct 13 06:28:45 crc kubenswrapper[4833]: I1013 06:28:45.202600 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.840038 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.840215 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.840562 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.840614 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.841642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.841717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.841732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.844315 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:28:48 crc kubenswrapper[4833]: I1013 06:28:48.985013 4833 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.714205 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.714435 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.715599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.715638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.715653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.725237 4833 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.726897 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.745202 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.745207 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.745425 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.745465 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.745992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.746016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.746026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.746420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.746453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.746467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.752065 4833 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Oct 13 06:28:49 crc kubenswrapper[4833]: I1013 06:28:49.842756 4833 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.030444 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.033104 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.033650 4833 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.548607 4833 apiserver.go:52] "Watching apiserver"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.551832 4833 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.552336 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-5xwt6","openshift-machine-config-operator/machine-config-daemon-wd7ss","openshift-multus/multus-additional-cni-plugins-9c9nw","openshift-multus/multus-zbg2r","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.553194 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zbg2r"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.553559 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.553900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.554015 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.554220 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.554516 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.554651 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.554951 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.555206 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.555297 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.555407 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5xwt6"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.555688 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.555998 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9c9nw"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.557907 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.558016 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.558198 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.557966 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.559382 4833 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.562284 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.562635 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.562740 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.566978 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.567400 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.568033 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.568369 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.569716 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.569990 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572237 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572330 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572349 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572392 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572261 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572297 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572308 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572312 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572335 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572659 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.572958 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.629109 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637055 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637080 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637120 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637148 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637163 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637180 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637195 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637209 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637223 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637264 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637279 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637293 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637307 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637322 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637338 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637387 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637404 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637418 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637433 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637447 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637464 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637487 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637508 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637549 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637566 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637582 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637597 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637613 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637628 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637642 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637660 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637678 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637696 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637712 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637750 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637770 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637788 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637805 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637827 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637841 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637855 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637864 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637870 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637915 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637935 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637966 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637983 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.637999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638051 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638068 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638085 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638115 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638129 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638144 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638164 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638198 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638213 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638227 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638243 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638258 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638273 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638305 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638320 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638335 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638352 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638367 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638381 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638395 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638410 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638457 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638473 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638489 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638506 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638520 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638547 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638563 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638578 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638593 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638608 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638626 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638643 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638660 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638676 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638692 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638708 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638726 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638740 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638756 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638773 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638789 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638805 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638820 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638835 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638850 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638864 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638880 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638895 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638138 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638911 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638911 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638419 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638452 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638626 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638664 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638771 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638882 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638893 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639042 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639112 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639140 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639153 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639242 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639294 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639471 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639718 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639845 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.639956 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640066 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640073 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640194 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640209 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640369 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640451 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640616 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640714 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640755 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640845 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.640950 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.641010 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.641097 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.641081 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.641124 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.643806 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.643886 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.644019 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.644145 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.644169 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.644403 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.644679 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.644859 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.645025 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.645194 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.645500 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.645988 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.646750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.648915 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.649065 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.649425 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.650102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.650389 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.650614 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.650924 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.651433 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.651736 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.651942 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.652158 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.652367 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.652603 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.652633 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.652839 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.652942 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.653888 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654234 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654227 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654344 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654447 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654587 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654586 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.638929 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654707 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654735 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654883 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654913 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654933 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654956 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654977 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654997 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655016 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655038 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655058 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655080 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655123 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655144 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655165 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655184 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655204 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655227 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655246 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655269 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655310 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655331 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655351 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655370 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655392 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655414 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655434 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655454 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655474 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655494 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655514 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655641 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655663 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655684 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655706 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655726 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655746 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655768 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655791 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655811 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655831 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655854 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655895 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655914 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655935 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655955 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655976 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.655996 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656018 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656041 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656061 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656082 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656103 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656124 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656145 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656169 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656193 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656215 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656236 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656258 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.656289 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654829 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654913 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.654953 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.657223 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.657530 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.657751 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.657942 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.658110 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.658420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.658889 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.660924 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.660965 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.660986 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661010 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661027 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661043 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661058 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661055 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661074 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661187 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661220 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661208 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661289 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661245 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661328 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661344 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661364 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661380 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661395 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661410 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661424 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661439 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661456 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" 
(UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661500 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661516 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661546 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661562 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661636 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-os-release\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661648 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-cni-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661693 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62lfr\" (UniqueName: \"kubernetes.io/projected/2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c-kube-api-access-62lfr\") pod \"node-resolver-5xwt6\" (UID: \"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\") " pod="openshift-dns/node-resolver-5xwt6" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-system-cni-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661734 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661751 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661767 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9d1bd0f7-c161-456d-af32-2da416006789-multus-daemon-config\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661782 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa5b6ea2-f89e-4768-8663-bd965bde64fa-rootfs\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661799 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661815 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fa5b6ea2-f89e-4768-8663-bd965bde64fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661831 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-hostroot\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661863 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661872 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cni-binary-copy\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661910 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661917 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.661998 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662127 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c-hosts-file\") pod \"node-resolver-5xwt6\" (UID: \"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\") " pod="openshift-dns/node-resolver-5xwt6" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662127 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662167 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662179 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662198 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa5b6ea2-f89e-4768-8663-bd965bde64fa-proxy-tls\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662221 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cnibin\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662250 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgfh6\" (UniqueName: \"kubernetes.io/projected/fa5b6ea2-f89e-4768-8663-bd965bde64fa-kube-api-access-xgfh6\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662278 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662305 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662330 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d1bd0f7-c161-456d-af32-2da416006789-cni-binary-copy\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662335 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662348 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662360 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662387 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662417 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662444 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k588t\" (UniqueName: \"kubernetes.io/projected/9d1bd0f7-c161-456d-af32-2da416006789-kube-api-access-k588t\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662470 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-system-cni-dir\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662483 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662493 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-cnibin\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662506 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662513 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-netns\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662584 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-conf-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.662644 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8r5f\" (UniqueName: \"kubernetes.io/projected/7b5da43b-92dd-4de8-942c-5c546a33ee6c-kube-api-access-r8r5f\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662691 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-socket-dir-parent\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.662709 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:51.162689351 +0000 UTC m=+21.263112267 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662732 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-k8s-cni-cncf-io\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662756 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-cni-bin\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662779 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-kubelet\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662801 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-os-release\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662822 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-cni-multus\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662842 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-etc-kubernetes\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662866 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662867 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662927 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-multus-certs\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662992 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662996 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663006 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663018 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663030 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663041 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663052 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663062 4833 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663071 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663080 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663090 4833 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663102 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663138 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663143 4833 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663166 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663901 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664253 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.663180 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664342 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664356 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664367 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664378 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664388 4833 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664399 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664409 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664419 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664429 4833 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664440 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664452 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664463 4833 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664473 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664483 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664493 4833 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664505 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664514 4833 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664524 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664550 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664560 4833 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664570 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664581 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664591 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664600 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664610 4833 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664608 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664619 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664631 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664646 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664656 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664666 4833 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664676 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664686 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664695 4833 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664705 4833 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664715 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664725 4833 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664735 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664772 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664791 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664803 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664814 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664825 4833 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664834 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664845 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664857 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664868 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664878 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664888 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664902 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664913 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664923 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664933 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664942 4833 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664952 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664961 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664971 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664981 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664980 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.664992 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665003 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665012 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665021 4833 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665031 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665040 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665049 4833 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665058 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665068 4833 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665077 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665086 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665096 4833 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665105 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" 
DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665114 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665123 4833 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665133 4833 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665142 4833 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665151 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665161 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665170 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665179 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665189 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665200 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665215 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665226 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665236 4833 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node 
\"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.665245 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.666924 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.666946 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.666862 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.667117 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.667506 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.667622 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.667646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.667923 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.667929 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.667975 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668160 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668270 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.662644 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668362 4833 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668387 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668415 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668483 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668944 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668946 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.668979 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.669017 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.669039 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.669054 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.669100 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.669267 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.669527 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.669674 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.670418 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.670905 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.671236 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.671859 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.673242 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.673382 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.673420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.673739 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.673925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.674269 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.674384 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.674706 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.674790 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.674960 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.675232 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.675249 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.675389 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.675671 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.676326 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.676496 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.676709 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.677003 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:28:51.176987119 +0000 UTC m=+21.277410035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.677271 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.677277 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.677484 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.677622 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.677656 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:51.177647407 +0000 UTC m=+21.278070323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.677825 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
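The UnmountVolume.TearDown failure at the top of this block is a startup race: the unmount ran before the kubevirt.io.hostpath-provisioner CSI driver re-registered with the restarted kubelet, so the operation is parked and retried (durationBeforeRetry 500ms). Registered drivers announce themselves through sockets under /var/lib/kubelet/plugins_registry/, so listing that directory shows which drivers the node currently knows about. A small diagnostic sketch (the directory is the kubelet's conventional plugin-registration path; adjust it if your kubelet runs with a non-default root directory):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Default directory where CSI drivers drop their registration sockets
// for the kubelet's plugin watcher.
const pluginRegistryDir = "/var/lib/kubelet/plugins_registry"

func main() {
	entries, err := os.ReadDir(pluginRegistryDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", pluginRegistryDir, err)
		os.Exit(1)
	}
	if len(entries) == 0 {
		fmt.Println("no CSI drivers registered yet; unmounts will keep failing with 'not found in the list of registered CSI drivers'")
		return
	}
	for _, e := range entries {
		// One socket per registered plugin, e.g.
		// kubevirt.io.hostpath-provisioner-reg.sock.
		if strings.HasSuffix(e.Name(), ".sock") {
			fmt.Println("registered:", e.Name())
		}
	}
}
```

Once the hostpath-provisioner pod comes back up and re-registers, the parked teardown succeeds on a later retry; no manual cleanup is needed.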
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.677838 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.677999 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.678118 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.678211 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.678423 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.680356 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.680521 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.680868 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.680893 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.680957 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.681135 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.681259 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.681429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.681479 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.681716 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.682259 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.683974 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.684361 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.684593 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.684611 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.684629 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.684679 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:51.184663483 +0000 UTC m=+21.285086399 (durationBeforeRetry 500ms). 
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.685834 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.688255 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.688957 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.691763 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.691789 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.691799 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.691847 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:51.191830643 +0000 UTC m=+21.292253559 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.692313 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.695930 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.696751 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.697069 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.697522 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.697783 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.697875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.698811 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.703691 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.703991 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.704013 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.704197 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.704304 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.704327 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.704420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.710999 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.711229 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.711999 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.712208 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.713916 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.715293 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.715612 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.721462 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.724656 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.734919 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.735105 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.740445 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.740934 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.746772 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.749052 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.749482 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.751209 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7" exitCode=255 Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.751335 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7"} Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.751396 4833 scope.go:117] "RemoveContainer" containerID="b4880365bfcff289807821704c623d9eebc0d005713cfd7f1aaf0a438bea3654" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.757199 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765190 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765666 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa5b6ea2-f89e-4768-8663-bd965bde64fa-rootfs\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765700 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa5b6ea2-f89e-4768-8663-bd965bde64fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765719 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-hostroot\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cni-binary-copy\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: 
\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765751 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765768 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765789 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c-hosts-file\") pod \"node-resolver-5xwt6\" (UID: \"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\") " pod="openshift-dns/node-resolver-5xwt6" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765817 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa5b6ea2-f89e-4768-8663-bd965bde64fa-proxy-tls\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765835 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cnibin\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765850 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgfh6\" (UniqueName: \"kubernetes.io/projected/fa5b6ea2-f89e-4768-8663-bd965bde64fa-kube-api-access-xgfh6\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d1bd0f7-c161-456d-af32-2da416006789-cni-binary-copy\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k588t\" (UniqueName: \"kubernetes.io/projected/9d1bd0f7-c161-456d-af32-2da416006789-kube-api-access-k588t\") pod 
\"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-system-cni-dir\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765917 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-cnibin\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765931 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-netns\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765945 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-conf-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.765966 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766079 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8r5f\" (UniqueName: \"kubernetes.io/projected/7b5da43b-92dd-4de8-942c-5c546a33ee6c-kube-api-access-r8r5f\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-conf-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766123 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-hostroot\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766135 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 06:28:50 
crc kubenswrapper[4833]: I1013 06:28:50.766108 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-socket-dir-parent\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-socket-dir-parent\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766168 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa5b6ea2-f89e-4768-8663-bd965bde64fa-rootfs\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-k8s-cni-cncf-io\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766246 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-cni-bin\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766264 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-kubelet\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-os-release\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766299 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-cni-multus\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766315 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-etc-kubernetes\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766315 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-cni-bin\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766356 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-multus-certs\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-k8s-cni-cncf-io\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766379 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-os-release\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766397 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-cni-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766415 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62lfr\" (UniqueName: \"kubernetes.io/projected/2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c-kube-api-access-62lfr\") pod \"node-resolver-5xwt6\" (UID: \"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\") " pod="openshift-dns/node-resolver-5xwt6" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766432 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-system-cni-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-kubelet\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766453 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9d1bd0f7-c161-456d-af32-2da416006789-multus-daemon-config\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766474 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-os-release\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " 
pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-etc-kubernetes\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766414 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-var-lib-cni-multus\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766638 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-multus-certs\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766679 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-os-release\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766725 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-system-cni-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766781 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-system-cni-dir\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766802 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cnibin\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766840 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766856 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c-hosts-file\") pod \"node-resolver-5xwt6\" (UID: \"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\") " pod="openshift-dns/node-resolver-5xwt6" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766886 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766907 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-host-run-netns\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.766906 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-cnibin\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767058 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767070 4833 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767099 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767108 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767117 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767126 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767177 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d1bd0f7-c161-456d-af32-2da416006789-multus-cni-dir\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767209 4833 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767219 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767228 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767237 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767247 4833 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767256 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767264 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767273 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767282 4833 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767291 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767301 4833 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767310 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767318 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767327 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767335 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767345 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767339 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b5da43b-92dd-4de8-942c-5c546a33ee6c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767353 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767410 4833 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767425 4833 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767438 4833 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767453 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767464 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767471 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d1bd0f7-c161-456d-af32-2da416006789-cni-binary-copy\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767474 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767555 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767567 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767576 4833 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767585 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767589 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9d1bd0f7-c161-456d-af32-2da416006789-multus-daemon-config\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767595 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767629 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767638 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cni-binary-copy\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767648 4833 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767683 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767695 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767696 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa5b6ea2-f89e-4768-8663-bd965bde64fa-mcd-auth-proxy-config\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767704 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on 
node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767713 4833 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767722 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767730 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767740 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767748 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767756 4833 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767764 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767773 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767781 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767790 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767800 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767808 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767817 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 
06:28:50.767827 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767835 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767843 4833 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767852 4833 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767861 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767870 4833 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767879 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767888 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767899 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767908 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767916 4833 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767924 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767927 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b5da43b-92dd-4de8-942c-5c546a33ee6c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " 
pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767934 4833 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767971 4833 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767981 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767990 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.767999 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768008 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768017 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768026 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768036 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768046 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768055 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768064 4833 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768073 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 
06:28:50.768082 4833 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768091 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768100 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768109 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768118 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768126 4833 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768135 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768144 4833 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768154 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768165 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768176 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768217 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768227 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768237 4833 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768246 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768255 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768264 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.768877 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa5b6ea2-f89e-4768-8663-bd965bde64fa-proxy-tls\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.771800 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.776992 4833 scope.go:117] "RemoveContainer" containerID="404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.777059 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 13 06:28:50 crc kubenswrapper[4833]: E1013 06:28:50.777201 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.782640 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.784645 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8r5f\" (UniqueName: \"kubernetes.io/projected/7b5da43b-92dd-4de8-942c-5c546a33ee6c-kube-api-access-r8r5f\") pod \"multus-additional-cni-plugins-9c9nw\" (UID: \"7b5da43b-92dd-4de8-942c-5c546a33ee6c\") " pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.785919 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgfh6\" (UniqueName: \"kubernetes.io/projected/fa5b6ea2-f89e-4768-8663-bd965bde64fa-kube-api-access-xgfh6\") pod \"machine-config-daemon-wd7ss\" (UID: \"fa5b6ea2-f89e-4768-8663-bd965bde64fa\") " pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.786396 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k588t\" (UniqueName: \"kubernetes.io/projected/9d1bd0f7-c161-456d-af32-2da416006789-kube-api-access-k588t\") pod \"multus-zbg2r\" (UID: \"9d1bd0f7-c161-456d-af32-2da416006789\") " pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.787251 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62lfr\" (UniqueName: \"kubernetes.io/projected/2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c-kube-api-access-62lfr\") pod \"node-resolver-5xwt6\" (UID: 
\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\") " pod="openshift-dns/node-resolver-5xwt6" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.793872 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.802727 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.810197 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.815929 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.824893 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.833693 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.843304 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.851493 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.858565 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.867427 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4880365bfcff289807821704c623d9eebc0d005713cfd7f1aaf0a438bea3654\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:34Z\\\",\\\"message\\\":\\\"W1013 06:28:33.825165 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1013 
06:28:33.825522 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760336913 cert, and key in /tmp/serving-cert-721161630/serving-signer.crt, /tmp/serving-cert-721161630/serving-signer.key\\\\nI1013 06:28:34.034703 1 observer_polling.go:159] Starting file observer\\\\nW1013 06:28:34.040111 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1013 06:28:34.040311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:34.042438 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-721161630/tls.crt::/tmp/serving-cert-721161630/tls.key\\\\\\\"\\\\nF1013 06:28:34.271299 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.876245 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.879315 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zbg2r" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.885593 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: W1013 06:28:50.889231 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1bd0f7_c161_456d_af32_2da416006789.slice/crio-0e45bd278278ded88aa877b5e6895541d1a05d7bfc93a7a9979b81628eca2a2b WatchSource:0}: Error finding container 0e45bd278278ded88aa877b5e6895541d1a05d7bfc93a7a9979b81628eca2a2b: Status 404 returned error can't find the container with id 0e45bd278278ded88aa877b5e6895541d1a05d7bfc93a7a9979b81628eca2a2b Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.896363 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.897439 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.904979 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.904995 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.913796 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.914251 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\
\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.923350 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5xwt6" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.923656 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.923780 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wnpc6"] Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.924961 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.930394 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.930431 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.930594 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.930666 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.930686 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.930735 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.930803 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.932850 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.935222 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.940310 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" Oct 13 06:28:50 crc kubenswrapper[4833]: W1013 06:28:50.943986 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-a8e20b4f42193300222c186eec821d57d100a92728b80e15b627e4548ff4c53d WatchSource:0}: Error finding container a8e20b4f42193300222c186eec821d57d100a92728b80e15b627e4548ff4c53d: Status 404 returned error can't find the container with id a8e20b4f42193300222c186eec821d57d100a92728b80e15b627e4548ff4c53d Oct 13 06:28:50 crc kubenswrapper[4833]: W1013 06:28:50.945674 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0ea581060c2dece1523245308cf5ab211b402681173334c417fe5c6d3623a650 WatchSource:0}: Error finding container 0ea581060c2dece1523245308cf5ab211b402681173334c417fe5c6d3623a650: Status 404 returned error can't find the container with id 0ea581060c2dece1523245308cf5ab211b402681173334c417fe5c6d3623a650 Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.949048 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.958684 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: W1013 06:28:50.960790 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa5b6ea2_f89e_4768_8663_bd965bde64fa.slice/crio-0a0b1d69297b2190e938dd5ea8d0fb72eb10c2cd8037c8f27f9ac0d009eea204 WatchSource:0}: Error finding container 0a0b1d69297b2190e938dd5ea8d0fb72eb10c2cd8037c8f27f9ac0d009eea204: Status 404 returned error can't find the container with id 0a0b1d69297b2190e938dd5ea8d0fb72eb10c2cd8037c8f27f9ac0d009eea204 Oct 13 06:28:50 crc kubenswrapper[4833]: W1013 06:28:50.964039 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b5da43b_92dd_4de8_942c_5c546a33ee6c.slice/crio-49bbc4e0a670cec9157d259b4b681d4a3f858ac2341bea8318706f781ae5f4d9 WatchSource:0}: Error finding container 49bbc4e0a670cec9157d259b4b681d4a3f858ac2341bea8318706f781ae5f4d9: Status 404 returned error can't find the container with id 49bbc4e0a670cec9157d259b4b681d4a3f858ac2341bea8318706f781ae5f4d9 Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.971570 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:50 crc kubenswrapper[4833]: I1013 06:28:50.988091 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.001907 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.012941 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4880365bfcff289807821704c623d9eebc0d005713cfd7f1aaf0a438bea3654\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:34Z\\\",\\\"message\\\":\\\"W1013 06:28:33.825165 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1013 
06:28:33.825522 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760336913 cert, and key in /tmp/serving-cert-721161630/serving-signer.crt, /tmp/serving-cert-721161630/serving-signer.key\\\\nI1013 06:28:34.034703 1 observer_polling.go:159] Starting file observer\\\\nW1013 06:28:34.040111 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1013 06:28:34.040311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:34.042438 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-721161630/tls.crt::/tmp/serving-cert-721161630/tls.key\\\\\\\"\\\\nF1013 06:28:34.271299 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.022967 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.032387 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.040963 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.053800 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.067975 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070453 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-etc-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070497 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070519 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-netd\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-netns\") pod \"ovnkube-node-wnpc6\" (UID: 
\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070613 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070642 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-ovn\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070792 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-script-lib\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070838 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-var-lib-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070870 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-log-socket\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070893 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-env-overrides\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070917 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-systemd-units\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070946 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-systemd\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.070971 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovn-node-metrics-cert\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.071036 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-node-log\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.071164 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-config\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.071216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-kubelet\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.071238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-bin\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.071305 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-ovn-kubernetes\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.071361 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vxf\" (UniqueName: \"kubernetes.io/projected/cb9a788e-b626-43a8-955a-bf4a5a3cb145-kube-api-access-k5vxf\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.071465 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-slash\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.094736 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.106430 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.114494 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.161969 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.171960 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-var-lib-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172003 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-log-socket\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172031 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-env-overrides\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172057 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-systemd-units\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172080 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovn-node-metrics-cert\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172109 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-log-socket\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-systemd\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 
06:28:51.172158 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-systemd\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172171 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-node-log\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172225 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-systemd-units\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172279 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-node-log\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172831 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-var-lib-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.172835 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-env-overrides\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174356 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-config\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174399 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-bin\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174431 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-kubelet\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174477 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-ovn-kubernetes\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174504 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5vxf\" (UniqueName: \"kubernetes.io/projected/cb9a788e-b626-43a8-955a-bf4a5a3cb145-kube-api-access-k5vxf\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174562 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-slash\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-etc-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174613 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174643 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-netd\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-netns\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-ovn\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174793 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-script-lib\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174826 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.174958 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-etc-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.174976 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.175042 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:52.175022293 +0000 UTC m=+22.275445279 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.175094 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-openvswitch\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.175135 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-netd\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.175173 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-netns\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.175213 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.175245 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-ovn\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.176501 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-ovn-kubernetes\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.177096 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-bin\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.177179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-kubelet\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.177824 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-slash\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.177921 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-script-lib\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.178869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-config\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.186261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovn-node-metrics-cert\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.199504 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5vxf\" (UniqueName: \"kubernetes.io/projected/cb9a788e-b626-43a8-955a-bf4a5a3cb145-kube-api-access-k5vxf\") pod \"ovnkube-node-wnpc6\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.247402 4833 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:51 crc kubenswrapper[4833]: W1013 06:28:51.272336 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb9a788e_b626_43a8_955a_bf4a5a3cb145.slice/crio-1e755011c31d18efdad2d1310ee11165d3f7fb1637877b36f7cb07c9b77a1e7c WatchSource:0}: Error finding container 1e755011c31d18efdad2d1310ee11165d3f7fb1637877b36f7cb07c9b77a1e7c: Status 404 returned error can't find the container with id 1e755011c31d18efdad2d1310ee11165d3f7fb1637877b36f7cb07c9b77a1e7c Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.275053 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.275116 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.275147 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.275169 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275242 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275297 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:52.275261805 +0000 UTC m=+22.375684721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275340 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-13 06:28:52.275334657 +0000 UTC m=+22.375757573 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275774 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275793 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275804 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275819 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275853 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275868 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275830 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:52.275822461 +0000 UTC m=+22.376245377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.275946 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:52.275924764 +0000 UTC m=+22.376347750 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.626354 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.626610 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.756983 4833 generic.go:334] "Generic (PLEG): container finished" podID="7b5da43b-92dd-4de8-942c-5c546a33ee6c" containerID="1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad" exitCode=0 Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.757095 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" event={"ID":"7b5da43b-92dd-4de8-942c-5c546a33ee6c","Type":"ContainerDied","Data":"1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.757163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" event={"ID":"7b5da43b-92dd-4de8-942c-5c546a33ee6c","Type":"ContainerStarted","Data":"49bbc4e0a670cec9157d259b4b681d4a3f858ac2341bea8318706f781ae5f4d9"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.758235 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e5628df94833afba66efa0bb8a2e8624906d00c8c52235a076be090ec4a880ff"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.760409 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15" exitCode=0 Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.760594 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.760661 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"1e755011c31d18efdad2d1310ee11165d3f7fb1637877b36f7cb07c9b77a1e7c"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.763521 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.763580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0ea581060c2dece1523245308cf5ab211b402681173334c417fe5c6d3623a650"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.765270 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.765315 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.765333 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a8e20b4f42193300222c186eec821d57d100a92728b80e15b627e4548ff4c53d"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.767376 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.769672 4833 scope.go:117] "RemoveContainer" containerID="404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7" Oct 13 06:28:51 crc kubenswrapper[4833]: E1013 06:28:51.769833 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.777455 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.777511 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.777525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"0a0b1d69297b2190e938dd5ea8d0fb72eb10c2cd8037c8f27f9ac0d009eea204"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.779529 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.780807 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zbg2r" event={"ID":"9d1bd0f7-c161-456d-af32-2da416006789","Type":"ContainerStarted","Data":"b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.780890 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zbg2r" event={"ID":"9d1bd0f7-c161-456d-af32-2da416006789","Type":"ContainerStarted","Data":"0e45bd278278ded88aa877b5e6895541d1a05d7bfc93a7a9979b81628eca2a2b"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.782682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5xwt6" event={"ID":"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c","Type":"ContainerStarted","Data":"5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.782724 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5xwt6" 
event={"ID":"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c","Type":"ContainerStarted","Data":"11e630344a1e2461caa785e9d624b47444bf5d14bfb1724e235a55e211dfd5ee"} Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.797728 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.817669 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.843998 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.868175 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.886594 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.890810 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.895146 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.905334 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.918470 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:51Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.934277 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:51Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.946126 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:51Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.959843 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4880365bfcff289807821704c623d9eebc0d005713cfd7f1aaf0a438bea3654\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:34Z\\\",\\\"message\\\":\\\"W1013 06:28:33.825165 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1013 
06:28:33.825522 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760336913 cert, and key in /tmp/serving-cert-721161630/serving-signer.crt, /tmp/serving-cert-721161630/serving-signer.key\\\\nI1013 06:28:34.034703 1 observer_polling.go:159] Starting file observer\\\\nW1013 06:28:34.040111 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1013 06:28:34.040311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:34.042438 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-721161630/tls.crt::/tmp/serving-cert-721161630/tls.key\\\\\\\"\\\\nF1013 06:28:34.271299 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:51Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.977251 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:51Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:51 crc kubenswrapper[4833]: I1013 06:28:51.992161 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:51Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.008896 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.024633 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.038981 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.049242 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.064862 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.080742 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.081793 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.090806 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.095026 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.104571 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z 
is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.119484 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.136325 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.159941 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.184903 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:52 
crc kubenswrapper[4833]: E1013 06:28:52.185074 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.185170 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:54.185148182 +0000 UTC m=+24.285571118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.207517 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.236420 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.283267 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.285605 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.285812 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:28:54.285783096 +0000 UTC m=+24.386206032 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.285871 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.285928 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.285966 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286059 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286064 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286092 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286105 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286113 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:54.286103025 +0000 UTC m=+24.386525941 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286136 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286167 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286184 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286147 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:54.286131105 +0000 UTC m=+24.386554011 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.286274 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:54.286256469 +0000 UTC m=+24.386679415 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.320612 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.368958 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.403195 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.439530 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.479974 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.519417 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.558867 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.601168 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.626211 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.626359 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.626214 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.626819 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.631769 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.632615 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.634015 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.634891 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.636105 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.636811 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.637589 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.638796 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.639450 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.643999 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.645492 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.646403 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.647082 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.647698 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.648307 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.649850 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.650510 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.651344 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.652023 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.653179 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.653831 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.654014 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.654476 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.655479 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.656128 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.657021 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.657610 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.658632 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.659083 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.660056 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.660562 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.661009 4833 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.661104 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.663213 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.663801 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.664648 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.666265 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.667138 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.667675 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.668684 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.669448 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.670645 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.671628 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.672701 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.673285 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.674356 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.675202 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.676141 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.676942 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.677831 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.678343 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.679365 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.680234 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.680856 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.681852 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.696384 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z 
is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.790897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40"} Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.790936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a"} Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.790945 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e"} Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.790953 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a"} Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.790961 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368"} Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.790969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4"} Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.792580 4833 generic.go:334] "Generic (PLEG): container finished" podID="7b5da43b-92dd-4de8-942c-5c546a33ee6c" containerID="dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49" exitCode=0 Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.792636 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" event={"ID":"7b5da43b-92dd-4de8-942c-5c546a33ee6c","Type":"ContainerDied","Data":"dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49"} Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.793591 4833 scope.go:117] "RemoveContainer" containerID="404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7" Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.793772 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 13 06:28:52 crc kubenswrapper[4833]: E1013 06:28:52.799181 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.808939 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.818508 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.835504 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.862786 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.899714 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.938840 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:52 crc kubenswrapper[4833]: I1013 06:28:52.978290 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:52Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.018872 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.059682 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.103657 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z 
is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.142973 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.182249 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.218959 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.626987 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:53 crc kubenswrapper[4833]: E1013 06:28:53.627181 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.800965 4833 generic.go:334] "Generic (PLEG): container finished" podID="7b5da43b-92dd-4de8-942c-5c546a33ee6c" containerID="2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e" exitCode=0 Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.801058 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" event={"ID":"7b5da43b-92dd-4de8-942c-5c546a33ee6c","Type":"ContainerDied","Data":"2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e"} Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.823874 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.849089 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.879159 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.903148 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.922129 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.939322 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.966862 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:53 crc kubenswrapper[4833]: I1013 06:28:53.989965 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:53Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.008227 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.025196 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vx
f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c
4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.039050 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.050481 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.063394 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.206885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.207087 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.207173 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:58.207154849 +0000 UTC m=+28.307577785 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.308480 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.308677 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.308725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308761 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:28:58.308728819 +0000 UTC m=+28.409151745 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.308818 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308879 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308902 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308918 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308922 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308920 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308964 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:58.308955825 +0000 UTC m=+28.409378751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308971 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.308985 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:58.308972946 +0000 UTC m=+28.409395942 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.309003 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.309088 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 06:28:58.309063738 +0000 UTC m=+28.409486694 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.626145 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.626234 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.626274 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:28:54 crc kubenswrapper[4833]: E1013 06:28:54.626418 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.810008 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065"} Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.813335 4833 generic.go:334] "Generic (PLEG): container finished" podID="7b5da43b-92dd-4de8-942c-5c546a33ee6c" containerID="67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170" exitCode=0 Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.813388 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" event={"ID":"7b5da43b-92dd-4de8-942c-5c546a33ee6c","Type":"ContainerDied","Data":"67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170"} Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.816722 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc"} Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.850908 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z 
is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.873976 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.891202 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.905862 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.925357 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.946939 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.971203 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.983435 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:54 crc kubenswrapper[4833]: I1013 06:28:54.998335 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:54Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.012857 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.029427 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.049728 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.061119 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.074060 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.091361 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z 
is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.104465 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.116098 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.128580 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.138240 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.151038 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.163049 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.174173 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.183579 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.192826 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.205047 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.217009 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.626789 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:55 crc kubenswrapper[4833]: E1013 06:28:55.627442 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.825054 4833 generic.go:334] "Generic (PLEG): container finished" podID="7b5da43b-92dd-4de8-942c-5c546a33ee6c" containerID="201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d" exitCode=0 Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.825210 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" event={"ID":"7b5da43b-92dd-4de8-942c-5c546a33ee6c","Type":"ContainerDied","Data":"201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d"} Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.843488 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.871858 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.888880 4833 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.911037 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.932705 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.947810 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.965129 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.980030 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:55 crc kubenswrapper[4833]: I1013 06:28:55.992809 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:55Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.008908 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.026387 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.040566 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.054532 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.100982 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-h2qtv"] Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.101534 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.103022 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.103311 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.103704 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.104035 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.117474 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.126825 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z6hc\" (UniqueName: \"kubernetes.io/projected/1fe7840a-9a54-429e-a148-a3f369ba5fda-kube-api-access-2z6hc\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.126859 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fe7840a-9a54-429e-a148-a3f369ba5fda-host\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.126875 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1fe7840a-9a54-429e-a148-a3f369ba5fda-serviceca\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.129686 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.143954 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.155467 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.169722 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.184006 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.195827 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.224556 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z 
is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.227499 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z6hc\" (UniqueName: \"kubernetes.io/projected/1fe7840a-9a54-429e-a148-a3f369ba5fda-kube-api-access-2z6hc\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.227569 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fe7840a-9a54-429e-a148-a3f369ba5fda-host\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.227595 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1fe7840a-9a54-429e-a148-a3f369ba5fda-serviceca\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.227677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fe7840a-9a54-429e-a148-a3f369ba5fda-host\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.228741 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1fe7840a-9a54-429e-a148-a3f369ba5fda-serviceca\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.237065 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.246152 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z6hc\" (UniqueName: \"kubernetes.io/projected/1fe7840a-9a54-429e-a148-a3f369ba5fda-kube-api-access-2z6hc\") pod \"node-ca-h2qtv\" (UID: \"1fe7840a-9a54-429e-a148-a3f369ba5fda\") " pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.254292 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.266208 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.281360 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.292723 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.311375 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.434191 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.436665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.436747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.436772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.436991 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.445006 4833 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.445526 4833 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.447424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.447463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.447475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.447493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.447505 4833 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.463833 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-h2qtv" Oct 13 06:28:56 crc kubenswrapper[4833]: E1013 06:28:56.465403 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.470900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.471088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.471460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.471937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.472155 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: E1013 06:28:56.490029 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: W1013 06:28:56.493008 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe7840a_9a54_429e_a148_a3f369ba5fda.slice/crio-0f6a7d3ee76ea9f742f9863e55c2a6829005fd29c209471471e85ca16d580802 WatchSource:0}: Error finding 
container 0f6a7d3ee76ea9f742f9863e55c2a6829005fd29c209471471e85ca16d580802: Status 404 returned error can't find the container with id 0f6a7d3ee76ea9f742f9863e55c2a6829005fd29c209471471e85ca16d580802 Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.493710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.493754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.493765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.493782 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.493795 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: E1013 06:28:56.509825 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.514256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.514306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.514322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.514343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.514355 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: E1013 06:28:56.525295 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.530269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.530315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
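
The x509 failure repeated above is the root cause of every rejected status patch in this section: the network-node-identity webhook at 127.0.0.1:9743 (address taken from the log) serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-13. A minimal Go sketch, assuming it is run on the node while the webhook is listening, that fetches the serving certificate and reports its validity window; InsecureSkipVerify lets it retrieve the expired certificate for inspection without trusting it:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Address comes from the log line above; we only inspect the cert,
        // so skipping verification here is safe and necessary.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("NotBefore:", cert.NotBefore.Format(time.RFC3339))
        fmt.Println("NotAfter: ", cert.NotAfter.Format(time.RFC3339))
        if time.Now().After(cert.NotAfter) {
            // Matches the log: "current time ... is after 2025-08-24T17:21:41Z".
            fmt.Println("certificate expired; webhook calls will fail x509 verification")
        }
    }

On this log's evidence the sketch would print a NotAfter of 2025-08-24T17:21:41Z, confirming that every Post to /node and /pod fails before the patch is even evaluated.
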
event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.530331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.530350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.530364 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: E1013 06:28:56.553424 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: E1013 06:28:56.553674 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.555453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
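
The "will retry" / "exceeds retry count" pair above reflects the kubelet's bounded retry of node-status updates (the constant nodeStatusUpdateRetry is 5 in recent kubelet sources; treat the exact value as an assumption). A rough sketch of that pattern, loosely modeled on the kubelet's updateNodeStatus/tryUpdateNodeStatus, with a stand-in attempt that always fails the way the expired webhook certificate makes it fail here:

    package main

    import (
        "errors"
        "fmt"
    )

    // Value taken from kubelet sources at the time of writing; an assumption.
    const nodeStatusUpdateRetry = 5

    // Stand-in for the real PATCH of the Node object, which the expired
    // webhook certificate rejects on every attempt in this log.
    func tryUpdateNodeStatus() error {
        return errors.New("failed to patch status: webhook certificate has expired")
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(); err != nil {
                fmt.Printf("Error updating node status, will retry (attempt %d): %v\n", i+1, err)
                continue
            }
            return nil
        }
        // This is the terminal error string seen at 06:28:56.553674.
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println(err)
        }
    }

This is why the log shows a fixed-length burst of identical patch failures ending in a single "Unable to update node status" entry, after which the whole cycle restarts on the next sync.
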
event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.555518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.555570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.555595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.555612 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.627030 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.627055 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:56 crc kubenswrapper[4833]: E1013 06:28:56.627207 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:28:56 crc kubenswrapper[4833]: E1013 06:28:56.627358 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.659216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.659244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.659251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.659263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.659272 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.764046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.764137 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.764161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.764192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.764217 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.830926 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h2qtv" event={"ID":"1fe7840a-9a54-429e-a148-a3f369ba5fda","Type":"ContainerStarted","Data":"0f6a7d3ee76ea9f742f9863e55c2a6829005fd29c209471471e85ca16d580802"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.836021 4833 generic.go:334] "Generic (PLEG): container finished" podID="7b5da43b-92dd-4de8-942c-5c546a33ee6c" containerID="92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b" exitCode=0 Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.836115 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" event={"ID":"7b5da43b-92dd-4de8-942c-5c546a33ee6c","Type":"ContainerDied","Data":"92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.846088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.847898 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.847994 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.857063 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.878227 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.882210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.882260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.882273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.882290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.882303 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.887491 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.888462 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.894674 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/opens
hift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.909415 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.923208 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.935752 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.948867 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.960963 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.973151 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.984964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.984990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.984998 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.985011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.985022 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:56Z","lastTransitionTime":"2025-10-13T06:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:56 crc kubenswrapper[4833]: I1013 06:28:56.992700 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:56Z 
is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.004842 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.015487 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.029125 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201
353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.041013 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.052248 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.063571 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.075491 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.087174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.087229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.087244 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.087264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.087280 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.097135 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d042
06dcb90a64711555d31fab8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.112603 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.124964 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.142200 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd0
5e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.154031 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.170530 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.187285 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.189796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.189828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.189837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.189851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.189861 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.205394 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.218504 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.232557 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.245115 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.292014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.292044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.292053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.292064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.292073 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.394960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.395027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.395043 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.395066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.395084 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.498077 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.498115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.498124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.498139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.498149 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.601343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.601388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.601399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.601420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.601436 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.625979 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:57 crc kubenswrapper[4833]: E1013 06:28:57.626150 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.704199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.704323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.704354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.704388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.704412 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.806631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.806681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.806694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.806711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.806723 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.851268 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h2qtv" event={"ID":"1fe7840a-9a54-429e-a148-a3f369ba5fda","Type":"ContainerStarted","Data":"0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.856120 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" event={"ID":"7b5da43b-92dd-4de8-942c-5c546a33ee6c","Type":"ContainerStarted","Data":"b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.856279 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.868249 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 
2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.887214 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.899577 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.909635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.909693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.909715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.909746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.909768 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:57Z","lastTransitionTime":"2025-10-13T06:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.922388 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.940302 
4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.959420 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.973080 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.983155 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z"
Oct 13 06:28:57 crc kubenswrapper[4833]: I1013 06:28:57.999476 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:57Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.012296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.012332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.012345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.012360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.012372 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.014452 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.030235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.050462 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.063507 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.076700 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.089720 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.103148 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.114560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.114600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.114613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.114649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.114660 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.117568 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.128624 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.140208 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.153067 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.164450 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.177167 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.189606 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.209301 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.216691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.216740 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.216748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.216764 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.216984 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.223096 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.233235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.246141 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.250207 4833 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.250353 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.250423 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:06.250408342 +0000 UTC m=+36.350831258 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.255014 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.286262 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.319762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.319809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.319826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.319851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.319867 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.350950 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.351062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351083 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:29:06.351055035 +0000 UTC m=+36.451477991 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.351128 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.351203 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351218 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351240 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351259 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351312 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:06.351295312 +0000 UTC m=+36.451718268 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351341 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351360 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351378 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351438 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:06.351422825 +0000 UTC m=+36.451845771 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351493 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.351662 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:06.351627811 +0000 UTC m=+36.452050767 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.422777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.422840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.422859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.422884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.422902 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.525916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.525963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.525975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.525994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.526007 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.626239 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.626305 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.626402 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:28:58 crc kubenswrapper[4833]: E1013 06:28:58.626690 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.628341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.628372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.628384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.628403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.628415 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.731244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.731295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.731306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.731319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.731329 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.833474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.833525 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.833560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.833585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.833597 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.936403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.936440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.936451 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.936467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:58 crc kubenswrapper[4833]: I1013 06:28:58.936479 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:58Z","lastTransitionTime":"2025-10-13T06:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.039181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.039231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.039248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.039269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.039286 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.141300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.141354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.141366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.141383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.141398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.243388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.243420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.243427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.243440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.243449 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.346877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.346910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.346918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.346929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.346937 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.449562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.449612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.449622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.449638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.449652 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.554104 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.554655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.554668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.554694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.554708 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.626666 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:28:59 crc kubenswrapper[4833]: E1013 06:28:59.626795 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
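Each cycle above is the same five-entry pattern, four "Recording event message" entries followed by a setters.go "Node became not ready" entry, repeated roughly every 100 ms because the kubelet's runtime network readiness check keeps failing for one reason: there is no CNI configuration file in /etc/kubernetes/cni/net.d/. Below is a minimal sketch of that check, assuming only the directory path quoted in the log; the extension filter (.conf, .conflist, .json) is an assumption about what CNI's libcni accepts, not a quote of kubelet source.

```go
// Sketch: reproduce the readiness probe implied by the log line
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the kubelet error above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		// Extension set is an assumption mirroring libcni's usual filter.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("CNI config candidate: %s\n", filepath.Join(confDir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; NetworkReady stays false")
	}
}
```

Until a config file appears in that directory (presumably written by the OVN-Kubernetes pods seen later in this log), NetworkReady remains false and no pod sandbox can be created, which is exactly what the "No sandbox for pod can be found" / "Error syncing pod" pair just below records.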
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.656912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.656984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.656994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.657040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.657051 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.759564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.759606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.759615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.759631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.759640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.861936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.861988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.861999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.862017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.862031 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.865760 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/0.log" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.869583 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a" exitCode=1 Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.869668 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.870679 4833 scope.go:117] "RemoveContainer" containerID="6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.890846 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
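Here the log's two failure threads meet: ovnkube-controller, the container that would bring up the CNI, has just died with exit code 1, and every "Failed to update status for pod" entry that follows ends with the same TLS error from the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose certificate expired 2025-08-24T17:21:41Z while the node clock reads 2025-10-13. Below is a small diagnostic sketch that fetches the certificate the webhook is actually serving and repeats the kubelet's validity comparison; the endpoint and dates come from the log itself, everything else is an illustrative assumption. Skipping chain verification is deliberate: the point is to read the validity window of whatever certificate is served, not to trust it.

```go
// Sketch: inspect the expired certificate behind the recurring
// "failed calling webhook ... x509: certificate has expired" errors below.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook address from the failed Post in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // diagnostic only: read the cert even though it is expired
	})
	if err != nil {
		fmt.Printf("handshake with %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// Mirrors the kubelet error: "current time ... is after ..."
		fmt.Printf("certificate expired: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
	}
}
```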
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:59Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.909568 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:59Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.930148 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:59Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.955151 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:28:59Z\\\",\\\"message\\\":\\\"7778 6088 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.727828 6088 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.727842 6088 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 06:28:59.726924 6088 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1013 06:28:59.727108 6088 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1013 06:28:59.727190 6088 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1013 06:28:59.727207 6088 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 06:28:59.727223 6088 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 06:28:59.728471 6088 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.729003 6088 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 06:28:59.729122 6088 factory.go:656] Stopping watch factory\\\\nI1013 06:28:59.729172 6088 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:59Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.965098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.965151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.965164 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.965181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.965195 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:28:59Z","lastTransitionTime":"2025-10-13T06:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.975267 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:59Z is after 2025-08-24T17:21:41Z" Oct 13 06:28:59 crc kubenswrapper[4833]: I1013 06:28:59.994966 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:28:59Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.022985 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.035974 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.058438 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.067391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.067436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.067452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.067478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.067495 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.075016 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.094980 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.112062 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.128917 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.145375 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.171519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.171600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.171615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.171636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.171652 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.274603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.274661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.274679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.274702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.274720 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.377124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.377209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.377242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.377271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.377309 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.480231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.480278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.480290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.480305 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.480320 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.582390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.582433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.582445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.582463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.582476 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.626940 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:00 crc kubenswrapper[4833]: E1013 06:29:00.627103 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.627128 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:00 crc kubenswrapper[4833]: E1013 06:29:00.627295 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.649098 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.672788 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.685164 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.685206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.685217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.685232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.685244 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.688391 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.706712 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.723775 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.735566 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.745847 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.757691 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.784838 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:28:59Z\\\",\\\"message\\\":\\\"7778 6088 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.727828 6088 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.727842 6088 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 06:28:59.726924 6088 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1013 06:28:59.727108 6088 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1013 06:28:59.727190 6088 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1013 06:28:59.727207 6088 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 06:28:59.727223 6088 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 06:28:59.728471 6088 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.729003 6088 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 06:28:59.729122 6088 factory.go:656] Stopping watch factory\\\\nI1013 06:28:59.729172 6088 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.787366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.787409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.787421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.787442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.787454 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.799521 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.818425 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.873674 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.876608 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/0.log" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.879242 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" 
event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.879937 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.889441 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.890038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.890082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.890093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.890107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.890117 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.901383 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.915523 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.937952 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:28:59Z\\\",\\\"message\\\":\\\"7778 6088 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.727828 6088 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.727842 6088 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 06:28:59.726924 6088 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1013 06:28:59.727108 6088 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1013 06:28:59.727190 6088 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1013 06:28:59.727207 6088 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 06:28:59.727223 6088 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 06:28:59.728471 6088 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.729003 6088 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 06:28:59.729122 6088 factory.go:656] Stopping watch factory\\\\nI1013 06:28:59.729172 6088 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.953440 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.969379 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.983036 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.992807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.992854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.992867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.992884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:00 crc kubenswrapper[4833]: I1013 06:29:00.992897 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:00Z","lastTransitionTime":"2025-10-13T06:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.001215 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.014660 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.030970 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.045844 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.058781 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.072049 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.085844 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.096096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.096138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.096151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.096168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.096180 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.098956 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.112933 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.198581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.198662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.198681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.198706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.198723 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.301027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.301088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.301107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.301131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.301150 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.404488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.404600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.404626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.404686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.404709 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.507533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.507629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.507652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.507683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.507706 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.610692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.610746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.610763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.610785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.610801 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.626294 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:01 crc kubenswrapper[4833]: E1013 06:29:01.626486 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.713478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.713933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.713958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.713989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.714010 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.816752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.816814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.816831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.816855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.816876 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.885305 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/1.log" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.886400 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/0.log" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.890229 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42" exitCode=1 Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.890315 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.890362 4833 scope.go:117] "RemoveContainer" containerID="6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.891046 4833 scope.go:117] "RemoveContainer" containerID="ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42" Oct 13 06:29:01 crc kubenswrapper[4833]: E1013 06:29:01.891231 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.915758 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.921064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.921115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.921131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.921155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.921173 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:01Z","lastTransitionTime":"2025-10-13T06:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.931701 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.946160 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 
2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.972452 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6
09c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6394e7b2038d4bcc3172601e4c9059231545d04206dcb90a64711555d31fab8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:28:59Z\\\",\\\"message\\\":\\\"7778 6088 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.727828 6088 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.727842 6088 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 06:28:59.726924 6088 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1013 06:28:59.727108 6088 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1013 06:28:59.727190 6088 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1013 06:28:59.727207 6088 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 06:28:59.727223 6088 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 06:28:59.728471 6088 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1013 06:28:59.729003 6088 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 06:28:59.729122 6088 factory.go:656] Stopping watch factory\\\\nI1013 06:28:59.729172 6088 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:01 crc kubenswrapper[4833]: I1013 06:29:01.986293 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.000737 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:29:01Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.018339 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a8798164
95a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.022998 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.023068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.023087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.023111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.023129 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.031827 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.046705 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.061527 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.077221 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.095149 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.112119 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.125311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.125362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.125374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.125391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.125403 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.126338 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.227511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.227616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.227644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.227678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.227701 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.330087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.330132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.330143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.330159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.330171 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.432864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.432949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.433168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.433200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.433220 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.535895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.535950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.535959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.535971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.535996 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.627077 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:02 crc kubenswrapper[4833]: E1013 06:29:02.627309 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.627674 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:02 crc kubenswrapper[4833]: E1013 06:29:02.627925 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.640284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.640625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.640797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.640964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.641121 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.744172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.744237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.744281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.744317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.744340 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.846871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.846929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.846946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.846969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.846988 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.895022 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/1.log" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.900051 4833 scope.go:117] "RemoveContainer" containerID="ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42" Oct 13 06:29:02 crc kubenswrapper[4833]: E1013 06:29:02.900355 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.917168 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\
"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.932942 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.945798 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.949295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.949333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.949343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.949356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.949366 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:02Z","lastTransitionTime":"2025-10-13T06:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.964903 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67
709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.977493 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:02 crc kubenswrapper[4833]: I1013 06:29:02.987713 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.000950 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a8798164
95a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:02Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.011785 4833 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.025173 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.039174 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.051659 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.051935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.051963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.051975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.051994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.052005 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.064937 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.076302 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.087646 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.154905 4833 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.154956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.154982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.155003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.155018 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.257704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.257787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.257823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.257863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.257887 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.360829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.360969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.360996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.361024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.361047 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.463901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.463972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.463994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.464023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.464057 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.566531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.566609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.566622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.566640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.566655 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.626308 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:03 crc kubenswrapper[4833]: E1013 06:29:03.626495 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.632370 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd"] Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.633509 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.636621 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.637155 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.651800 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.664796 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.669039 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.669080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc 
kubenswrapper[4833]: I1013 06:29:03.669097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.669119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.669136 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.675650 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 
13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.688779 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.702724 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.714803 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.728533 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.730703 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.730759 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjrg\" (UniqueName: \"kubernetes.io/projected/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-kube-api-access-fhjrg\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.730793 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.730938 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.739784 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.755480 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.771834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.771891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.771908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.771932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.771948 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.773254 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.792513 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.812410 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.831311 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.831406 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 
06:29:03.831654 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.831749 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.831830 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjrg\" (UniqueName: \"kubernetes.io/projected/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-kube-api-access-fhjrg\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.832278 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.832729 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.841259 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.850092 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjrg\" (UniqueName: \"kubernetes.io/projected/ed1cfb6c-df6a-4c55-b6ed-481f665cdea5-kube-api-access-fhjrg\") pod \"ovnkube-control-plane-749d76644c-x6fvd\" (UID: \"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.863383 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.875513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.875619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.875641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.875671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.875694 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.882359 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-10-13T06:29:03Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.947534 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" Oct 13 06:29:03 crc kubenswrapper[4833]: W1013 06:29:03.966462 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded1cfb6c_df6a_4c55_b6ed_481f665cdea5.slice/crio-208f142f0a504410cbaf6b811e542cebabe75735d72c7ecf1d657e3cd2348aba WatchSource:0}: Error finding container 208f142f0a504410cbaf6b811e542cebabe75735d72c7ecf1d657e3cd2348aba: Status 404 returned error can't find the container with id 208f142f0a504410cbaf6b811e542cebabe75735d72c7ecf1d657e3cd2348aba Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.978478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.978514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.978525 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.978559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:03 crc kubenswrapper[4833]: I1013 06:29:03.978573 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:03Z","lastTransitionTime":"2025-10-13T06:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.080871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.080916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.080929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.080946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.080958 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.184984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.185045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.185063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.185086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.185108 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.287076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.287136 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.287155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.287183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.287201 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.380091 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-28gq6"] Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.380777 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:04 crc kubenswrapper[4833]: E1013 06:29:04.380875 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.390237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.390317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.390342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.390374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.390398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.402020 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is 
after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.425429 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.438230 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbgw\" (UniqueName: \"kubernetes.io/projected/2fd6b1c1-777a-46be-960c-c6109d1615ad-kube-api-access-7lbgw\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.438323 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.445824 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 
06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.466167 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.489942 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.493181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.493230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.493240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.493257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.493267 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.511228 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.539601 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lbgw\" (UniqueName: \"kubernetes.io/projected/2fd6b1c1-777a-46be-960c-c6109d1615ad-kube-api-access-7lbgw\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.539687 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:04 crc kubenswrapper[4833]: E1013 06:29:04.539837 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:04 crc kubenswrapper[4833]: E1013 06:29:04.539903 4833 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs podName:2fd6b1c1-777a-46be-960c-c6109d1615ad nodeName:}" failed. No retries permitted until 2025-10-13 06:29:05.039884596 +0000 UTC m=+35.140307522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs") pod "network-metrics-daemon-28gq6" (UID: "2fd6b1c1-777a-46be-960c-c6109d1615ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.545726 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67
709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.560865 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.570588 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lbgw\" (UniqueName: \"kubernetes.io/projected/2fd6b1c1-777a-46be-960c-c6109d1615ad-kube-api-access-7lbgw\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.574193 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.587802 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.595289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.595330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.595344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.595363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.595379 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.604146 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.622936 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 
2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.626318 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.626326 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:04 crc kubenswrapper[4833]: E1013 06:29:04.626471 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:04 crc kubenswrapper[4833]: E1013 06:29:04.626597 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.641577 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.655075 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.671912 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a8798164
95a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.684496 4833 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.697409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.697442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.697451 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.697467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.697477 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.799296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.799362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.799379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.799405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.799422 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.901867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.901908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.901925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.901947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.901966 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:04Z","lastTransitionTime":"2025-10-13T06:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.910591 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" event={"ID":"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5","Type":"ContainerStarted","Data":"f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.910665 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" event={"ID":"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5","Type":"ContainerStarted","Data":"83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.910696 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" event={"ID":"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5","Type":"ContainerStarted","Data":"208f142f0a504410cbaf6b811e542cebabe75735d72c7ecf1d657e3cd2348aba"} Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.931422 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.949730 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.969743 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:04 crc kubenswrapper[4833]: I1013 06:29:04.986682 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:04Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.004357 4833 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.004417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.004434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.004460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.004478 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.010331 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.030914 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.045275 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:05 crc kubenswrapper[4833]: E1013 06:29:05.045475 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:05 crc kubenswrapper[4833]: E1013 06:29:05.046151 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs podName:2fd6b1c1-777a-46be-960c-c6109d1615ad nodeName:}" failed. No retries permitted until 2025-10-13 06:29:06.046081687 +0000 UTC m=+36.146504693 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs") pod "network-metrics-daemon-28gq6" (UID: "2fd6b1c1-777a-46be-960c-c6109d1615ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.050349 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.070715 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.107375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.107412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.107420 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.107434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.107445 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.108035 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67
709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.133529 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.149457 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.163483 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.180510 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.191098 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.204475 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.213338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.213416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.213430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.213449 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.213462 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.222968 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.315985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.316031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.316049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.316070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.316085 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.419424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.419493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.419515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.419571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.419589 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.522860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.522904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.522916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.522932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.522944 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.625445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.625498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.625515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.625567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.625585 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.626138 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:05 crc kubenswrapper[4833]: E1013 06:29:05.626312 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.626361 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:05 crc kubenswrapper[4833]: E1013 06:29:05.626791 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.627137 4833 scope.go:117] "RemoveContainer" containerID="404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.728127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.728429 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.728440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.728484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.728495 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.830560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.830588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.830596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.830609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.830618 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.916295 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.920060 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.920523 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.933270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.933350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.933366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.933413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.933430 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:05Z","lastTransitionTime":"2025-10-13T06:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.934495 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.955359 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.968272 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.982646 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:05 crc kubenswrapper[4833]: I1013 06:29:05.992949 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:05Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.004182 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.016499 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.028211 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.036269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.036343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.036360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.036380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.036392 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.044023 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.056976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.057135 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.057073 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.057249 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs podName:2fd6b1c1-777a-46be-960c-c6109d1615ad nodeName:}" failed. No retries permitted until 2025-10-13 06:29:08.057223715 +0000 UTC m=+38.157646661 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs") pod "network-metrics-daemon-28gq6" (UID: "2fd6b1c1-777a-46be-960c-c6109d1615ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.073609 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.097174 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.110949 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.122157 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.138103 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.138371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.138413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.138427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.138448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.138462 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.151908 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.240724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.240780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.240792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.240809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.240821 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.259597 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.259789 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.259894 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:22.25987174 +0000 UTC m=+52.360294736 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.343062 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.343098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.343109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.343124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.343135 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.361224 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.361352 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.361447 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.361521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361762 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361780 4833 configmap.go:193] Couldn't get 
configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361808 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361829 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361841 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361861 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361872 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:29:22.361845011 +0000 UTC m=+52.462267927 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361882 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361920 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:22.361896562 +0000 UTC m=+52.462319518 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361951 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-10-13 06:29:22.361931833 +0000 UTC m=+52.462354789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.361980 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:22.361966734 +0000 UTC m=+52.462389690 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.446300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.446338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.446348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.446363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.446373 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.549447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.549513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.549532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.549586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.549605 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.626239 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.626267 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.626477 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.626659 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.652558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.652596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.652606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.652621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.652632 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.755774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.755817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.755825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.755836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.755845 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.858787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.858851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.858914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.858940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.858958 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.920874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.920919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.920931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.920948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.920961 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.940256 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.945411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.945461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.945477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.945498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.945516 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.965704 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.971017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.971055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.971071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.971094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.971111 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:06 crc kubenswrapper[4833]: E1013 06:29:06.986842 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:06Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.990980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.991011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.991024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.991037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:06 crc kubenswrapper[4833]: I1013 06:29:06.991048 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:06Z","lastTransitionTime":"2025-10-13T06:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: E1013 06:29:07.010525 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:07Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.015175 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.015219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.015229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.015243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.015253 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: E1013 06:29:07.033248 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:07Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:07 crc kubenswrapper[4833]: E1013 06:29:07.033485 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.035323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.035365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.035382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.035402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.035420 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.138667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.138728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.138748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.138774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.138795 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.242038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.242083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.242094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.242110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.242121 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.345734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.345805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.345823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.345849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.345867 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.448996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.449054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.449065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.449083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.449095 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.552420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.552489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.552512 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.552571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.552596 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.626786 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:07 crc kubenswrapper[4833]: E1013 06:29:07.626977 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.626786 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:07 crc kubenswrapper[4833]: E1013 06:29:07.627168 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.655769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.655827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.655846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.655868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.655885 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.758201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.758262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.758285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.758315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.758337 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.861733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.861791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.861809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.861832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.861850 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.965027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.965088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.965106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.965131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:07 crc kubenswrapper[4833]: I1013 06:29:07.965150 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:07Z","lastTransitionTime":"2025-10-13T06:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.068278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.068354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.068378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.068408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.068430 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.082240 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:08 crc kubenswrapper[4833]: E1013 06:29:08.082509 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:08 crc kubenswrapper[4833]: E1013 06:29:08.082642 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs podName:2fd6b1c1-777a-46be-960c-c6109d1615ad nodeName:}" failed. No retries permitted until 2025-10-13 06:29:12.082617816 +0000 UTC m=+42.183040842 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs") pod "network-metrics-daemon-28gq6" (UID: "2fd6b1c1-777a-46be-960c-c6109d1615ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.170687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.170736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.170758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.170779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.170792 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.274250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.274295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.274306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.274321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.274336 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.378310 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.378373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.378393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.378416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.378435 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.481600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.481634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.481642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.481653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.481662 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.584589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.584647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.584666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.584691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.584709 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.626886 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.626978 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:08 crc kubenswrapper[4833]: E1013 06:29:08.627064 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:08 crc kubenswrapper[4833]: E1013 06:29:08.627172 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.688023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.688082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.688100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.688124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.688150 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.791595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.791670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.791698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.791731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.791754 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.894588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.894655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.894677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.894699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.894716 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.997087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.997172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.997204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.997234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:08 crc kubenswrapper[4833]: I1013 06:29:08.997256 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:08Z","lastTransitionTime":"2025-10-13T06:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.100576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.100636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.100648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.100665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.100678 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.202977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.203018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.203030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.203048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.203060 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.305712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.305779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.305799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.305824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.305842 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.408322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.408376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.408388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.408406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.408419 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.511321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.511387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.511398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.511420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.511433 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.615045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.615113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.615127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.615142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.615158 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.626731 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.626731 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:09 crc kubenswrapper[4833]: E1013 06:29:09.626921 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:09 crc kubenswrapper[4833]: E1013 06:29:09.627057 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.717422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.717482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.717502 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.717524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.717576 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.819633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.819721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.819737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.819759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.819774 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.922126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.922177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.922194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.922217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:09 crc kubenswrapper[4833]: I1013 06:29:09.922234 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:09Z","lastTransitionTime":"2025-10-13T06:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.024435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.024493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.024510 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.024559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.024577 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.127843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.127890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.127907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.127927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.127944 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.230936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.230987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.231004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.231024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.231041 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.334490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.334568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.334593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.334622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.334641 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.437192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.437258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.437276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.437298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.437315 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.540302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.540367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.540384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.540409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.540426 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.626129 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:10 crc kubenswrapper[4833]: E1013 06:29:10.626250 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.626517 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:10 crc kubenswrapper[4833]: E1013 06:29:10.626619 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.640665 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.642323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.642361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.642370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.642383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.642394 4833 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.659583 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82
799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.673917 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.684656 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.698135 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.717341 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.731300 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.744067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.744103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.744115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.744134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.744145 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.745191 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.755012 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.769871 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.778594 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 
06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.786696 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.796667 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.809043 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.824576 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.835059 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.846501 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.846598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.846632 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.846654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.846674 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.949343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.949650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.949717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.949811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:10 crc kubenswrapper[4833]: I1013 06:29:10.949900 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:10Z","lastTransitionTime":"2025-10-13T06:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.051788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.051826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.051838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.051853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.051864 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.154561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.154599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.154610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.154627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.154644 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.257794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.257897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.257923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.257955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.257998 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.361112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.361157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.361167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.361179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.361188 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.464236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.464296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.464319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.464346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.464369 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.567935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.568004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.568023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.568043 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.568065 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.626085 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.626195 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:11 crc kubenswrapper[4833]: E1013 06:29:11.626308 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:11 crc kubenswrapper[4833]: E1013 06:29:11.626453 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.671664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.671730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.671745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.671769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.671783 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.775350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.775403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.775426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.775461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.775487 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.878705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.879096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.879283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.879482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.879712 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.982195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.982234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.982244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.982260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:11 crc kubenswrapper[4833]: I1013 06:29:11.982274 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:11Z","lastTransitionTime":"2025-10-13T06:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.085050 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.085165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.085182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.085206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.085225 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.124595 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:12 crc kubenswrapper[4833]: E1013 06:29:12.124761 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:12 crc kubenswrapper[4833]: E1013 06:29:12.124854 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs podName:2fd6b1c1-777a-46be-960c-c6109d1615ad nodeName:}" failed. No retries permitted until 2025-10-13 06:29:20.124828608 +0000 UTC m=+50.225251564 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs") pod "network-metrics-daemon-28gq6" (UID: "2fd6b1c1-777a-46be-960c-c6109d1615ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.188103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.188166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.188184 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.188206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.188227 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.290367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.290441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.290452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.290473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.290485 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.392737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.392776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.392786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.392801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.392813 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.494946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.495011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.495034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.495061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.495083 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.597461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.597528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.597592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.597620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.597642 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.626909 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.626894 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:12 crc kubenswrapper[4833]: E1013 06:29:12.627133 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:12 crc kubenswrapper[4833]: E1013 06:29:12.627244 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
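Separately, the node keeps reporting NotReady because the container runtime reports NetworkReady=false: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, and the ovnkube-node pod that would normally write one is itself in CrashLoopBackOff above. A conceptual Go sketch of that readiness probe, an illustration rather than the kubelet's actual implementation:

// cniprobe.go: conceptual sketch of the test implied by the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" messages.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Matches the node condition in the log: NetworkReady=false,
		// reason NetworkPluginNotReady, hence the NodeNotReady events.
		fmt.Println("no CNI configuration file found; network not ready")
		os.Exit(2)
	}
	fmt.Println("CNI configuration present:", found)
}
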
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.701185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.701241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.701258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.701281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.701303 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.803735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.803803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.803821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.803852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.803874 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.906949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.907011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.907027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.907058 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:12 crc kubenswrapper[4833]: I1013 06:29:12.907077 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:12Z","lastTransitionTime":"2025-10-13T06:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.010186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.010242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.010259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.010282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.010299 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.113744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.113800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.113817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.113844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.113861 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.217042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.217182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.217198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.217223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.217659 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.326017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.326096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.326114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.326139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.326160 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.429183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.429242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.429259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.429282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.429301 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
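The five-entry block above is the kubelet refreshing the node's Ready condition on each status tick: the four kubelet_node_status.go lines record node events, and setters.go then writes the Ready=False condition whose JSON appears in the condition={...} field. A minimal stdlib Go sketch of producing a payload of that shape — the struct here is illustrative, not the kubelet's own types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition mirrors the JSON keys of the condition={...} field in the
// log above; it is a local stand-in, not a kubelet type.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	cond := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	fmt.Println(string(out)) // same shape as the condition={...} field above
}
```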
Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.626668 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.626723 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:29:13 crc kubenswrapper[4833]: E1013 06:29:13.626805 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:29:13 crc kubenswrapper[4833]: E1013 06:29:13.626903 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
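These pod sync errors all share one root cause: the kubelet asks the container runtime (cri-o here) for its status, and the runtime reports NetworkReady=false because no CNI config file exists yet in /etc/kubernetes/cni/net.d/ — apparently because the network provider (OVN-Kubernetes on this cluster) has not come up and written one. A stdlib-only Go sketch of that directory check, illustrative rather than cri-o's actual logic:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfDir is the directory named in the kubelet error above.
const cniConfDir = "/etc/kubernetes/cni/net.d"

// networkReady reports whether at least one CNI config file exists in dir,
// roughly the condition behind the NetworkReady=false message in the log.
func networkReady(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders accept
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := networkReady(cniConfDir)
	if err != nil || !ready {
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s (err=%v)\n", cniConfDir, err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```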
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.634133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.634166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.634174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.634187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.634197 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.736786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.736862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.736885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.736914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.736938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.839910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.839966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.840003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.840026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.840053 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.942315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.942368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.942385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.942407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:13 crc kubenswrapper[4833]: I1013 06:29:13.942425 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:13Z","lastTransitionTime":"2025-10-13T06:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.044726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.044769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.044781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.044796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.044809 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.146718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.146762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.146775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.146794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.146805 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.249448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.249492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.249502 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.249517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.249529 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.352743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.352794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.352806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.352824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.352839 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.455275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.455346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.455364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.455387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.455405 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.558420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.558487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.558509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.558571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.558594 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.627007 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:14 crc kubenswrapper[4833]: E1013 06:29:14.627225 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.627314 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:14 crc kubenswrapper[4833]: E1013 06:29:14.627497 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.661645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.661726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.661759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.661789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.661812 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.764395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.764434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.764442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.764455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.764465 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.866971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.867029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.867048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.867073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.867091 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.970072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.970106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.970116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.970128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:14 crc kubenswrapper[4833]: I1013 06:29:14.970138 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:14Z","lastTransitionTime":"2025-10-13T06:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072586 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
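From 06:29:15 onward a second, independent failure appears in the entries below: every pod status patch the kubelet sends is rejected, because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-10-13 — consistent with a cluster image resumed long after its certificates' lifetime, before the cert-regeneration controllers have caught up. The "x509: certificate has expired or is not yet valid" text is Go's crypto/x509 validity check; a minimal sketch of that check against a hypothetical PEM file path:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; on the node the webhook's serving cert lives
	// wherever network-node-identity is configured to keep it.
	pemBytes, err := os.ReadFile("webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The same window comparison certificate verification performs:
	// invalid if now is after NotAfter or before NotBefore.
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```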
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.072586 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.175188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.175268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.175288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.175324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.175342 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.208173 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.229628 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.247124 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.267334 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.278016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.278073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.278092 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.278115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.278133 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.291475 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8f
b5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.304153 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.316575 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.328308 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:
03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.338950 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.353061 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.366067 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.379010 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.380622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.380654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.380666 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.380683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.380696 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.395481 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67
709ae8bf8ae320c9f620da42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.409024 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.420716 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.437157 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a8798164
95a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.447374 4833 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:15Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.483061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.483114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.483137 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.483158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.483173 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.585557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.586262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.586329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.586358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.586382 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.626899 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.627053 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:15 crc kubenswrapper[4833]: E1013 06:29:15.627180 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:15 crc kubenswrapper[4833]: E1013 06:29:15.627271 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.689460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.689573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.689600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.689631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.689656 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.792580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.792659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.792697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.792729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.792751 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.895949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.896006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.896023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.896047 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.896064 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.998982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.999042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.999065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.999097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:15 crc kubenswrapper[4833]: I1013 06:29:15.999117 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:15Z","lastTransitionTime":"2025-10-13T06:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.101211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.101273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.101290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.101313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.101330 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.204697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.204764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.204776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.204792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.204805 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.307803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.307843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.307858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.307873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.307882 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.410417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.410477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.410494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.410520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.410571 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.513794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.513842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.513851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.513870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.513879 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.617306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.617357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.617368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.617385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.617398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.626715 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.626766 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:16 crc kubenswrapper[4833]: E1013 06:29:16.626909 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:16 crc kubenswrapper[4833]: E1013 06:29:16.627033 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.720385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.720453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.720473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.720498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.720519 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.827066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.827116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.827128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.827144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.827156 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.929592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.929643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.929655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.929673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:16 crc kubenswrapper[4833]: I1013 06:29:16.929688 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:16Z","lastTransitionTime":"2025-10-13T06:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.032682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.033001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.033107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.033200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.033285 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.135878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.135926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.135937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.135956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.135967 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.238643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.238695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.238706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.238722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.238732 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.252306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.252332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.252341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.252352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.252361 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: E1013 06:29:17.269868 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:17Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.273812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.273852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.273864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.273883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.273896 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: E1013 06:29:17.290293 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:17Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.294904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.294938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.294948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.294964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.294978 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: E1013 06:29:17.311324 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:17Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.315362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.315419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.315438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.315463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.315480 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: E1013 06:29:17.331528 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:17Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.335636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.335691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.335709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.335732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.335749 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: E1013 06:29:17.354628 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:17Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:17 crc kubenswrapper[4833]: E1013 06:29:17.354818 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.356602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.356668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.356684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.356701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.356713 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.459922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.459979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.459992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.460009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.460021 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.563297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.563356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.563374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.563417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.563456 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.626473 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.626583 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:17 crc kubenswrapper[4833]: E1013 06:29:17.626652 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:17 crc kubenswrapper[4833]: E1013 06:29:17.626713 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.627726 4833 scope.go:117] "RemoveContainer" containerID="ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.665973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.666021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.666032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.666048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.666059 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.768138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.768588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.768612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.768637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.768655 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.871648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.871721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.871745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.871776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.871801 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.962996 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/1.log" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.969933 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.970395 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.973511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.973603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.973624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.973654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.973680 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:17Z","lastTransitionTime":"2025-10-13T06:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:17 crc kubenswrapper[4833]: I1013 06:29:17.987833 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:17Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.005291 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.026036 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.063435 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.076499 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.076572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.076587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.076606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.076622 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.081369 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.101895 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.117676 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.141693 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.163250 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.178611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.178672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.178682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.178697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.178710 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.183209 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.201957 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.212676 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 
06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.224013 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.232992 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.247357 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.256617 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.281108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.281144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.281154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.281166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.281175 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.383999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.384050 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.384068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.384094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.384112 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.487084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.487133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.487146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.487162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.487175 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.589264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.589327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.589339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.589358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.589376 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.625992 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.626063 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:18 crc kubenswrapper[4833]: E1013 06:29:18.626431 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:18 crc kubenswrapper[4833]: E1013 06:29:18.626651 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.692770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.692840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.692865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.692895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.692919 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.796178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.796634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.796773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.796861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.796946 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.900011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.900083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.900098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.900117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.900128 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:18Z","lastTransitionTime":"2025-10-13T06:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.977601 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/2.log" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.978965 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/1.log" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.984004 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151" exitCode=1 Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.984075 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151"} Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.984146 4833 scope.go:117] "RemoveContainer" containerID="ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42" Oct 13 06:29:18 crc kubenswrapper[4833]: I1013 06:29:18.985320 4833 scope.go:117] "RemoveContainer" containerID="48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151" Oct 13 06:29:18 crc kubenswrapper[4833]: E1013 06:29:18.985751 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.002609 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:18Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.004040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.004119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.004138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.004169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.004189 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.018095 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.031975 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.046002 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.067727 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b3
6241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.080280 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.094296 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z"
Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.107256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.107521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.107611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.107687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.107768 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.111891 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.135268 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c951018
81215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added 
*v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.151051 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 
06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.164973 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.184287 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.208402 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.210595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.210672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.210697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.210727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.210747 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.216467 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.226994 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.228227 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.247520 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.262350 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.280190 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.295256 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.313583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.313661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.313688 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.313716 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.313740 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.317364 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c951018
81215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae8c795090aae63c1fbb048307831c958ca8ba67709ae8bf8ae320c9f620da42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:00Z\\\",\\\"message\\\":\\\"ing OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1013 06:29:00.891650 6258 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added 
*v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.331904 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 
06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.344236 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.357852 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.372725 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.383201 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.398965 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.409936 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.415699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.415728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.415740 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.415756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.415767 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.422282 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.436117 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.450875 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.463853 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.478196 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.495258 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b3
6241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.513957 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:19Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.517309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.517348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.517360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.517380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.517394 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.622178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.622252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.622272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.622296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.622313 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.626898 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.626928 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:19 crc kubenswrapper[4833]: E1013 06:29:19.627139 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:19 crc kubenswrapper[4833]: E1013 06:29:19.627234 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.725065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.725116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.725127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.725146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.725160 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.828328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.828394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.828413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.828442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.828465 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.931956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.932051 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.932069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.932092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.932110 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:19Z","lastTransitionTime":"2025-10-13T06:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.988603 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/2.log" Oct 13 06:29:19 crc kubenswrapper[4833]: I1013 06:29:19.992495 4833 scope.go:117] "RemoveContainer" containerID="48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151" Oct 13 06:29:19 crc kubenswrapper[4833]: E1013 06:29:19.992862 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.009993 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 
06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.022603 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.035385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.035444 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.035461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.035484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.035517 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.041045 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\
\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.086055 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.107289 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.132259 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.137753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.137815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.137837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.137861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.137881 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.151748 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.163139 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.185272 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.198870 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.207091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:20 crc kubenswrapper[4833]: E1013 06:29:20.207346 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:20 crc kubenswrapper[4833]: E1013 06:29:20.207456 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs podName:2fd6b1c1-777a-46be-960c-c6109d1615ad nodeName:}" failed. No retries permitted until 2025-10-13 06:29:36.207430814 +0000 UTC m=+66.307853780 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs") pod "network-metrics-daemon-28gq6" (UID: "2fd6b1c1-777a-46be-960c-c6109d1615ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.218133 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.232849 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.240739 4833 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.240882 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.240908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.240935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.240952 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.253887 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 
06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.266747 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.279474 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.293125 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.305388 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.344859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.344914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.344929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.344946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.344958 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.447178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.447211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.447223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.447240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.447256 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.549915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.549964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.549975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.549996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.550009 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.627043 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:20 crc kubenswrapper[4833]: E1013 06:29:20.627175 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.627270 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:20 crc kubenswrapper[4833]: E1013 06:29:20.627477 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.643248 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.652473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.652510 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.652521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.652551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.652563 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.660910 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.680918 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.696745 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.720607 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.738704 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.755583 4833 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.755624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.755635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.755652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.755663 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.756465 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 
06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.773527 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.797815 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.816785 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.849147 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.859044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.859079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.859090 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.859108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.859119 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.864175 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.885526 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc 
kubenswrapper[4833]: I1013 06:29:20.898268 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.916892 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.929092 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.945466 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:20Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.961443 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.961506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.961524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.961589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:20 crc kubenswrapper[4833]: I1013 06:29:20.961607 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:20Z","lastTransitionTime":"2025-10-13T06:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.064521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.064598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.064614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.064633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.064645 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.167143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.167207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.167230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.167259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.167283 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.269515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.269619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.269645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.269676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.269699 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.373030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.373113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.373126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.373143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.373155 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.475768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.475825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.475843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.475865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.475882 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.579212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.579297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.579311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.579329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.579342 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.626311 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.626396 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:21 crc kubenswrapper[4833]: E1013 06:29:21.626469 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:21 crc kubenswrapper[4833]: E1013 06:29:21.626607 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.681997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.682047 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.682064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.682088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.682106 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.785047 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.785099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.785117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.785142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.785158 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.887893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.887961 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.887984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.888010 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.888033 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.991334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.991372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.991383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.991400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:21 crc kubenswrapper[4833]: I1013 06:29:21.991412 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:21Z","lastTransitionTime":"2025-10-13T06:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.094199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.094238 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.094250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.094266 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.094277 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.197043 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.197099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.197115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.197138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.197156 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.299857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.299918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.299935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.299959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.299991 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.332012 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.332226 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.332350 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:54.332323507 +0000 UTC m=+84.432746463 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.403143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.403180 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.403192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.403213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.403223 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.433485 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.433777 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.433844 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.433910 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434074 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434106 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434125 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434189 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:54.434167174 +0000 UTC m=+84.534590140 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434365 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434388 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434400 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434435 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:54.434422661 +0000 UTC m=+84.534845587 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434487 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:29:54.434479922 +0000 UTC m=+84.534902848 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.434868 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.435035 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:29:54.435019137 +0000 UTC m=+84.535442063 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.506034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.506485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.506711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.506881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.507039 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.610408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.610468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.610485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.610512 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.610529 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.626830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.626849 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.627038 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:22 crc kubenswrapper[4833]: E1013 06:29:22.627132 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.713059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.713132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.713151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.713177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.713194 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.816456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.816509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.816526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.816588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.816607 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.920123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.920494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.920666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.920804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:22 crc kubenswrapper[4833]: I1013 06:29:22.920949 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:22Z","lastTransitionTime":"2025-10-13T06:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.024262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.024343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.024356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.024376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.024390 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.127458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.127492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.127506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.127522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.127531 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.230612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.230660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.230673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.230690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.230701 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.333940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.334356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.334522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.334857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.335007 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.437197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.437506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.437683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.437861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.438015 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.540514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.540571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.540582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.540596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.540607 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.626815 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.626823 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:23 crc kubenswrapper[4833]: E1013 06:29:23.626971 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:23 crc kubenswrapper[4833]: E1013 06:29:23.627162 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.643902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.643945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.643957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.643975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.644007 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.746773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.746810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.746821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.746839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.746852 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.849925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.849962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.849972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.849989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.850000 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.952039 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.952076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.952086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.952100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:23 crc kubenswrapper[4833]: I1013 06:29:23.952112 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:23Z","lastTransitionTime":"2025-10-13T06:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.054481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.054558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.054571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.054588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.054600 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.157128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.157206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.157216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.157229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.157238 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.260448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.260508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.260525 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.260575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.260591 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.363569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.363608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.363618 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.363635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.363673 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.466212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.466252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.466264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.466280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.466291 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.568990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.569052 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.569073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.569103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.569125 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.627034 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:24 crc kubenswrapper[4833]: E1013 06:29:24.627175 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.627214 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:24 crc kubenswrapper[4833]: E1013 06:29:24.627386 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.672383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.672453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.672478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.672508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.672533 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.774468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.774588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.774612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.774640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.774660 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.877920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.877964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.877975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.877991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.878004 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.980415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.980461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.980474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.980490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:24 crc kubenswrapper[4833]: I1013 06:29:24.980502 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:24Z","lastTransitionTime":"2025-10-13T06:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.083725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.083758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.083766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.083778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.083786 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.186581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.186654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.186678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.186709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.186732 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.289274 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.289309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.289323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.289340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.289351 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.392738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.392786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.392796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.392813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.392827 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.495781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.495842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.495858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.495881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.495899 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.598755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.598812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.598844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.598860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.598871 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.626701 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.626713 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:25 crc kubenswrapper[4833]: E1013 06:29:25.626858 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
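Every entry above reports the same root cause: the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so the node's Ready condition stays False and no pod sandbox can be created until the network provider writes its config. A minimal diagnostic sketch follows; the directory path is taken verbatim from the log messages, while the glob patterns are an assumption about which file types a CNI runtime typically loads:

#!/usr/bin/env python3
"""Inspect the CNI config directory the kubelet complains about.

A minimal sketch for running on the node (e.g. inside `oc debug node/crc`).
The path comes verbatim from the log; the extensions are an assumption.
"""
import pathlib

CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")  # path from the log message

def main() -> None:
    if not CNI_DIR.is_dir():
        print(f"{CNI_DIR} does not exist: no network provider has written a config yet")
        return
    # Assumed: the runtime picks up .conf/.conflist/.json files from this directory.
    configs = sorted(CNI_DIR.glob("*.conf*")) + sorted(CNI_DIR.glob("*.json"))
    if not configs:
        print(f"{CNI_DIR} exists but holds no CNI config: network plugin not ready")
    for cfg in configs:
        print(f"found CNI config: {cfg}")

if __name__ == "__main__":
    main()

An empty result would match the NetworkPluginNotReady condition logged above; once the provider starts, a config file appearing here is what lets the kubelet flip NetworkReady to true.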
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:25 crc kubenswrapper[4833]: E1013 06:29:25.626988 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.701060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.701120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.701138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.701165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.701183 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.803681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.803732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.803744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.803764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.803780 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.906667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.906725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.906743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.906766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:25 crc kubenswrapper[4833]: I1013 06:29:25.906783 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:25Z","lastTransitionTime":"2025-10-13T06:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.009849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.009985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.010011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.010038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.010060 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.113047 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.113278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.113369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.113617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.113779 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.217398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.217756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.217931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.218057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.218180 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.320434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.320487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.320499 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.320516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.320527 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.422522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.422575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.422584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.422598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.422607 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.525214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.525257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.525268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.525285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.525296 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.626130 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.626206 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:26 crc kubenswrapper[4833]: E1013 06:29:26.626316 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:26 crc kubenswrapper[4833]: E1013 06:29:26.626452 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.627744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.627781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.627797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.627819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.627834 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.730492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.730569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.730579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.730595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.730606 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.833287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.833350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.833367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.833392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.833413 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.936384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.936420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.936430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.936444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:26 crc kubenswrapper[4833]: I1013 06:29:26.936455 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:26Z","lastTransitionTime":"2025-10-13T06:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.039405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.039447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.039458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.039475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.039486 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.142387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.142435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.142453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.142472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.142485 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.244774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.244828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.244839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.244857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.244872 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347882 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
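The same five kubelet_node_status.go entries recur roughly every 100 ms for as long as the node stays NotReady, which makes runs like the one above hard to scan. A minimal summarizer sketch follows, assuming journal lines in exactly the format shown here; the script name and invocation are illustrative:

#!/usr/bin/env python3
"""Summarize repeated kubelet node-status events from a journal excerpt.

Minimal sketch, assuming lines shaped like:
  Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.039475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feed the log on stdin, e.g.: journalctl -u kubelet | python3 summarize_events.py
"""
import collections
import re
import sys

# klog prefix: I<MMDD> <HH:MM:SS>.<usec> ... event="<Name>"
EVENT_RE = re.compile(r'I(\d{4} \d{2}:\d{2}:\d{2})\.\d+ .* event="(\w+)"')

def main() -> None:
    counts = collections.Counter()
    for line in sys.stdin:
        m = EVENT_RE.search(line)
        if m:
            # key: (wall-clock second, event name) -> occurrences in that second
            counts[(m.group(1), m.group(2))] += 1
    for (second, event), n in sorted(counts.items()):
        print(f"{second}  {event:<24} x{n}")

if __name__ == "__main__":
    main()

Collapsing the stream this way makes the cadence visible (each event fires about ten times per second) while leaving the one-off entries, such as the sandbox and pod-sync errors, easy to spot in the raw log.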
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.347882 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.450115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.450151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.450162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.450176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.450185 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.451630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.451673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.451684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.451699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.451711 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: E1013 06:29:27.466079 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:27Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.470821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.470862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.470871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.470886 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.470896 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: E1013 06:29:27.483638 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:27Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.488576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.488630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.488641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.488672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.488683 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: E1013 06:29:27.502581 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:27Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.505640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.505667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.505680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.505695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.505706 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: E1013 06:29:27.516772 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:27Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.520646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.520706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.520722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.520745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.520765 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: E1013 06:29:27.536816 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:27Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:27 crc kubenswrapper[4833]: E1013 06:29:27.537022 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.552494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.552561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.552578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.552598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.552614 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.626400 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.626407 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:27 crc kubenswrapper[4833]: E1013 06:29:27.626749 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:27 crc kubenswrapper[4833]: E1013 06:29:27.626976 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.655388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.655451 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.655475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.655507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.655567 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.758582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.758622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.758635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.758651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.758664 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.861260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.861311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.861323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.861340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.861351 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.963827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.963898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.963923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.963953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:27 crc kubenswrapper[4833]: I1013 06:29:27.963975 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:27Z","lastTransitionTime":"2025-10-13T06:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.065943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.065978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.065990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.066006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.066017 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.168472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.168529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.168576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.168602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.168619 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.270663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.270722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.270741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.270763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.270781 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.373467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.373510 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.373521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.373551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.373564 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.476002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.476068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.476085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.476108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.476125 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.579286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.579337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.579354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.579375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.579403 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.626178 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:28 crc kubenswrapper[4833]: E1013 06:29:28.626349 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.626559 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:28 crc kubenswrapper[4833]: E1013 06:29:28.626672 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.682098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.682131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.682141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.682154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.682164 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.788398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.788970 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.788989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.789013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.789030 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.891753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.891799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.891812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.891828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.891844 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.994952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.995070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.995089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.995115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:28 crc kubenswrapper[4833]: I1013 06:29:28.995133 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:28Z","lastTransitionTime":"2025-10-13T06:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.098802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.098865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.098883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.098907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.098924 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.201467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.201508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.201519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.201553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.201566 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.304747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.304799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.304815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.304828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.304837 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.407977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.408048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.408072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.408095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.408112 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.510315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.510360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.510374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.510390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.510401 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.613843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.613905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.613923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.613947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.613964 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.626698 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.626804 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:29 crc kubenswrapper[4833]: E1013 06:29:29.626864 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:29 crc kubenswrapper[4833]: E1013 06:29:29.626992 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.717668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.717763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.717782 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.717807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.717825 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.820732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.820802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.820822 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.820848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.820866 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.923575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.923617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.923629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.923645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:29 crc kubenswrapper[4833]: I1013 06:29:29.923658 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:29Z","lastTransitionTime":"2025-10-13T06:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.025635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.025713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.025755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.025788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.025812 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.128256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.128330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.128351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.128378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.128399 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.230818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.230879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.230911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.230940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.230961 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.333343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.333454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.333478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.333506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.333525 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.435712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.435771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.435804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.435844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.435868 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.538935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.539023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.539048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.539075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.539096 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.626295 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:30 crc kubenswrapper[4833]: E1013 06:29:30.626514 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.626872 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:30 crc kubenswrapper[4833]: E1013 06:29:30.627027 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.640922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.640976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.640990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.641009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.641022 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.644247 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.663604 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.676750 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.697617 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.718321 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.734385 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.744528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.744630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.744645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.744677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.744696 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.753886 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.771426 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.791097 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.804072 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.820992 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.835157 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.847548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.847588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.847597 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.847612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.847621 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.866553 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c951018
81215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.881597 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.896649 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.912933 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.933521 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:30Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.949055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.949093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.949101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.949116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:30 crc kubenswrapper[4833]: I1013 06:29:30.949125 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:30Z","lastTransitionTime":"2025-10-13T06:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.051360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.051402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.051413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.051428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.051439 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.153858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.153891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.153900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.153914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.153923 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.255964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.256242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.256333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.256411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.256480 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.359049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.359086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.359097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.359112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.359123 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.462058 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.462112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.462122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.462139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.462152 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.565440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.565810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.565962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.566115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.566245 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.626762 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.626801 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:31 crc kubenswrapper[4833]: E1013 06:29:31.627179 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:31 crc kubenswrapper[4833]: E1013 06:29:31.627272 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.627469 4833 scope.go:117] "RemoveContainer" containerID="48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151" Oct 13 06:29:31 crc kubenswrapper[4833]: E1013 06:29:31.627836 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.670040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.670113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.670137 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.670166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.670186 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.773120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.773187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.773204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.773226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.773244 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.876228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.876283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.876300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.876322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.876340 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.979712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.979785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.979804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.979828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:31 crc kubenswrapper[4833]: I1013 06:29:31.979846 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:31Z","lastTransitionTime":"2025-10-13T06:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.082422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.082486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.082504 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.082531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.082596 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.185461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.185520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.185574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.185607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.185629 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.288773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.288829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.288840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.288859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.288915 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.391579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.391629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.391638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.391652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.391663 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.493837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.493884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.493928 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.493952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.493972 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.596668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.596710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.596722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.596739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.596749 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.626562 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.626595 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:32 crc kubenswrapper[4833]: E1013 06:29:32.626725 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:32 crc kubenswrapper[4833]: E1013 06:29:32.626808 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.699662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.699708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.699719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.699735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.699748 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.802207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.802256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.802275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.802298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.802314 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.904478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.904506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.904514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.904526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:32 crc kubenswrapper[4833]: I1013 06:29:32.904547 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:32Z","lastTransitionTime":"2025-10-13T06:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.007036 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.007108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.007119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.007134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.007147 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.110250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.110302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.110314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.110331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.110347 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.212384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.212422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.212434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.212449 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.212460 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.315359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.315406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.315417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.315433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.315443 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.417457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.417499 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.417510 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.417529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.417557 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.520973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.521015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.521026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.521041 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.521050 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.624085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.624122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.624133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.624149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.624467 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.626353 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.626355 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:33 crc kubenswrapper[4833]: E1013 06:29:33.626603 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:33 crc kubenswrapper[4833]: E1013 06:29:33.626971 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.727774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.727845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.727858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.727874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.727885 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.830193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.830265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.830291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.830321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.830343 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.932982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.933045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.933057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.933073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:33 crc kubenswrapper[4833]: I1013 06:29:33.933083 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:33Z","lastTransitionTime":"2025-10-13T06:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.035677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.035711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.035720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.035732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.035740 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.139630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.139710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.139722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.139744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.139760 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.242323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.242368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.242379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.242396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.242410 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.344654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.344712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.344721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.344736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.344745 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.446766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.446799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.446810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.446824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.446834 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.549803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.549858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.549875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.549900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.549916 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.626195 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:34 crc kubenswrapper[4833]: E1013 06:29:34.626334 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.626402 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:34 crc kubenswrapper[4833]: E1013 06:29:34.626617 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.651890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.651938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.651949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.651963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.651972 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.754451 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.754498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.754512 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.754530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.754560 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.856485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.856558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.856574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.856594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.856606 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.958487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.958521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.958548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.958567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:34 crc kubenswrapper[4833]: I1013 06:29:34.958579 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:34Z","lastTransitionTime":"2025-10-13T06:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.060613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.060674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.060691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.060713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.060731 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.163289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.163361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.163385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.163412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.163433 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.266493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.266531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.266556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.266572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.266582 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.368447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.368485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.368495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.368508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.368518 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.471240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.471655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.471811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.471946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.472029 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.575182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.575221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.575229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.575242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.575251 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.627017 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:35 crc kubenswrapper[4833]: E1013 06:29:35.627125 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.627025 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:35 crc kubenswrapper[4833]: E1013 06:29:35.627281 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.677373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.677408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.677421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.677436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.677446 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.778689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.778780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.778794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.778811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.778823 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.880662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.880706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.880718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.880735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.880745 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.983237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.983269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.983280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.983296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:35 crc kubenswrapper[4833]: I1013 06:29:35.983310 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:35Z","lastTransitionTime":"2025-10-13T06:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.085287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.085324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.085337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.085351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.085363 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.187426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.187476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.187487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.187511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.187524 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.271238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:36 crc kubenswrapper[4833]: E1013 06:29:36.272239 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:36 crc kubenswrapper[4833]: E1013 06:29:36.272349 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs podName:2fd6b1c1-777a-46be-960c-c6109d1615ad nodeName:}" failed. No retries permitted until 2025-10-13 06:30:08.272296201 +0000 UTC m=+98.372719117 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs") pod "network-metrics-daemon-28gq6" (UID: "2fd6b1c1-777a-46be-960c-c6109d1615ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.289975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.290019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.290037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.290056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.290069 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.392937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.393308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.393340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.393366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.393382 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
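The "No retries permitted until ... (durationBeforeRetry 32s)" line reflects the kubelet's per-volume exponential backoff: each failed MountVolume.SetUp roughly doubles the wait before the next attempt. A minimal sketch of that schedule; the 500ms base and 2m2s cap are assumed upstream defaults, and only the 32s figure comes from the line above:

```python
# Minimal sketch of the backoff that produces "durationBeforeRetry 32s":
# the wait doubles after each failure up to a cap. Base and cap values
# here are assumptions taken from upstream kubelet defaults, not from
# this log; 32s lands at the seventh consecutive failure.
from datetime import timedelta

base, cap = timedelta(milliseconds=500), timedelta(minutes=2, seconds=2)
wait = base
for attempt in range(1, 11):
    print(f"failure {attempt:2d}: next retry in {wait.total_seconds():>6.1f}s")
    wait = min(wait * 2, cap)
```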
Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.495610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.495701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.495724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.495750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.495770 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.598080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.598138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.598154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.598175 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.598192 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.626523 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.626627 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:36 crc kubenswrapper[4833]: E1013 06:29:36.626686 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:36 crc kubenswrapper[4833]: E1013 06:29:36.626740 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.699917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.699965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.699979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.699994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.700006 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.802565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.802597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.802609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.802623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.802634 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.905548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.905589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.905600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.905614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:36 crc kubenswrapper[4833]: I1013 06:29:36.905624 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:36Z","lastTransitionTime":"2025-10-13T06:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.008025 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.008079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.008093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.008111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.008122 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.110670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.110713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.110726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.110745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.110759 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.213402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.213447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.213458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.213474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.213486 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.316096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.316155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.316172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.316196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.316214 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.418711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.418778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.418788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.418803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.418813 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.521404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.521464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.521475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.521493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.521506 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.624699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.624757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.624779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.624809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.624832 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.626269 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.626284 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:37 crc kubenswrapper[4833]: E1013 06:29:37.626436 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:37 crc kubenswrapper[4833]: E1013 06:29:37.626625 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.727391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.727455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.727474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.727508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.727606 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.829732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.829777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.829789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.829804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.829816 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.830740 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.830783 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.830795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.830812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.830822 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: E1013 06:29:37.846354 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:37Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.850292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.850358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.850372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.850389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.850400 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: E1013 06:29:37.865518 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:37Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.869354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.869391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.869403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.869417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.869425 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: E1013 06:29:37.884626 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:37Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.888026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.888094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.888112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.888136 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.888153 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: E1013 06:29:37.903078 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:37Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.907085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.907151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.907168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.907194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.907214 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:37 crc kubenswrapper[4833]: E1013 06:29:37.924816 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:37Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:37 crc kubenswrapper[4833]: E1013 06:29:37.924986 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.932734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.932809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.932831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.932855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:37 crc kubenswrapper[4833]: I1013 06:29:37.932871 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:37Z","lastTransitionTime":"2025-10-13T06:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.034982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.035033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.035044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.035061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.035073 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.046614 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/0.log" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.046659 4833 generic.go:334] "Generic (PLEG): container finished" podID="9d1bd0f7-c161-456d-af32-2da416006789" containerID="b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b" exitCode=1 Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.046684 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zbg2r" event={"ID":"9d1bd0f7-c161-456d-af32-2da416006789","Type":"ContainerDied","Data":"b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.047034 4833 scope.go:117] "RemoveContainer" containerID="b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.064150 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.075921 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.092099 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.106263 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.116936 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.127481 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.136805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.136836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.136844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.136858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.136875 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.140822 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.151151 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 
06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.162205 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.173271 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.184193 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.194882 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.212220 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.225425 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.235734 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.239149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.239200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.239209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.239222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.239232 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.251358 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.262934 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:38Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.341023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.341067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.341082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.341101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.341112 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.443307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.443478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.443633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.443766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.443873 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.546436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.546472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.546483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.546500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.546511 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.627017 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.627078 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:38 crc kubenswrapper[4833]: E1013 06:29:38.627172 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:38 crc kubenswrapper[4833]: E1013 06:29:38.627321 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.648238 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.648276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.648287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.648303 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.648315 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.751386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.751444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.751460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.751483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.751500 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.854555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.854623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.854634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.854650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.854664 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.957479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.957526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.957555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.957573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:38 crc kubenswrapper[4833]: I1013 06:29:38.957585 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:38Z","lastTransitionTime":"2025-10-13T06:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.052593 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/0.log" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.052875 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zbg2r" event={"ID":"9d1bd0f7-c161-456d-af32-2da416006789","Type":"ContainerStarted","Data":"f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.061202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.061827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.062010 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.062163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.062302 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.068501 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] 
Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.081918 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.094576 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.111797 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.123680 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.132643 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.141503 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.158739 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.165641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.165711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc 
kubenswrapper[4833]: I1013 06:29:39.165729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.166159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.166211 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.169755 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 
13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.183338 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.196102 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.206051 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.217128 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.228396 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.240738 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.252128 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.266231 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:39Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.268683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.268718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.268731 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.268748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.268762 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.370653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.370701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.370714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.370731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.370744 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.473864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.473925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.473942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.473967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.473984 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.576995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.577039 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.577047 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.577060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.577070 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.626591 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.626675 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:39 crc kubenswrapper[4833]: E1013 06:29:39.626775 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:39 crc kubenswrapper[4833]: E1013 06:29:39.626826 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.679507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.679581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.679594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.679611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.679624 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.781603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.781651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.781662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.781679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.781693 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.884315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.884341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.884350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.884361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.884368 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.986194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.986254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.986271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.986294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:39 crc kubenswrapper[4833]: I1013 06:29:39.986314 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:39Z","lastTransitionTime":"2025-10-13T06:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.089301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.089352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.089364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.089377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.089385 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.192355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.192403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.192417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.192435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.192447 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.295503 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.295622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.295644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.295667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.295684 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.398235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.398382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.398407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.398438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.398697 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.502251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.502320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.502342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.502371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.502394 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.605817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.605909 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.605940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.605969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.605991 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.626316 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:40 crc kubenswrapper[4833]: E1013 06:29:40.626529 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.626584 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:40 crc kubenswrapper[4833]: E1013 06:29:40.626732 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.647553 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.663936 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.685292 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a8798164
95a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.698894 4833 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.708831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.708855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.708881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.708899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.708907 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.715098 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.735778 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.749997 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.770139 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b3
6241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.788679 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.804983 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.811781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.811810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.811821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.811835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 
06:29:40.811853 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.819785 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.839863 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c951018
81215fe52eadb79fa6667151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.851644 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.863740 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.877285 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.892557 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.905934 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:40Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.915029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.915087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.915095 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.915108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:40 crc kubenswrapper[4833]: I1013 06:29:40.915118 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:40Z","lastTransitionTime":"2025-10-13T06:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.018273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.018330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.018349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.018375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.018457 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.121374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.121419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.121433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.121450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.121462 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.223620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.223666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.223678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.223700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.223713 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.325402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.325435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.325444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.325457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.325470 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.427430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.427464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.427475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.427489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.427501 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.529792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.529833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.529846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.529862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.529873 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.626237 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.626331 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:41 crc kubenswrapper[4833]: E1013 06:29:41.626427 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:41 crc kubenswrapper[4833]: E1013 06:29:41.626514 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.631969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.632017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.632035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.632056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.632072 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.733948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.734001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.734012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.734029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.734044 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.836189 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.836255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.836307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.836337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.836360 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.938372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.938404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.938413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.938426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:41 crc kubenswrapper[4833]: I1013 06:29:41.938438 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:41Z","lastTransitionTime":"2025-10-13T06:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.040321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.040364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.040375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.040394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.040406 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.142378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.142431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.142450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.142472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.142489 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.244934 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.244986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.245006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.245029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.245046 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.351244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.351407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.351437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.351517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.351624 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.454061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.454106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.454123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.454145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.454162 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.556624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.556660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.556670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.556685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.556694 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.626585 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.626657 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:42 crc kubenswrapper[4833]: E1013 06:29:42.626709 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:42 crc kubenswrapper[4833]: E1013 06:29:42.626778 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.659212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.659248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.659257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.659268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.659277 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.762373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.762428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.762439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.762454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.762466 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.864903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.864975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.864989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.865008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.865019 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.967229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.967259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.967267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.967279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:42 crc kubenswrapper[4833]: I1013 06:29:42.967287 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:42Z","lastTransitionTime":"2025-10-13T06:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.069006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.069063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.069075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.069090 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.069100 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.171287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.171318 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.171327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.171340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.171348 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.273871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.273913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.273924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.273941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.273953 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.377244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.377301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.377319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.377341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.377357 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.479447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.479511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.479522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.479553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.479562 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.581960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.582000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.582008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.582022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.582031 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.626740 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.626773 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:43 crc kubenswrapper[4833]: E1013 06:29:43.626916 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:43 crc kubenswrapper[4833]: E1013 06:29:43.627047 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.684617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.685077 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.685281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.685448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.685659 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.788241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.788612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.788774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.788931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.789063 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.892319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.892355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.892364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.892378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.892389 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.994803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.994848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.994856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.994872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:43 crc kubenswrapper[4833]: I1013 06:29:43.994881 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:43Z","lastTransitionTime":"2025-10-13T06:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.098142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.098208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.098225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.098249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.098265 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.201110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.201190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.201209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.201267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.201287 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.304427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.304488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.304505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.304529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.304584 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.407726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.407787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.407804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.407828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.407847 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.510427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.510490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.510507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.510523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.510559 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.613444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.613505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.613518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.613552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.613564 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.629247 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:44 crc kubenswrapper[4833]: E1013 06:29:44.629366 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.629604 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:44 crc kubenswrapper[4833]: E1013 06:29:44.629712 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.716048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.716078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.716086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.716116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.716126 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.819133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.819161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.819169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.819182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.819194 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.923297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.923354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.923370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.923415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:44 crc kubenswrapper[4833]: I1013 06:29:44.923432 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:44Z","lastTransitionTime":"2025-10-13T06:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.027045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.027437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.027624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.027785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.027922 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.129818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.129869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.129882 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.129902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.129914 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.233448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.233519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.233545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.233562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.233573 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.336258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.336321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.336333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.336351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.336791 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.440094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.440146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.440158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.440179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.440193 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.543414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.543476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.543503 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.543517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.543526 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.626900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.626977 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:45 crc kubenswrapper[4833]: E1013 06:29:45.627251 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:45 crc kubenswrapper[4833]: E1013 06:29:45.627415 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.642316 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.646353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.646417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.646430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.646447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.646461 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.749809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.749873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.749895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.749924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.749948 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.853132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.853209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.853234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.853263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.853287 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.955847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.955910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.955928 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.955951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:45 crc kubenswrapper[4833]: I1013 06:29:45.955967 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:45Z","lastTransitionTime":"2025-10-13T06:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.059588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.059646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.059664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.059691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.059709 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.163348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.163401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.163410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.163424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.163434 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.266625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.266706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.266723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.266746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.266763 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.370794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.371175 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.371326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.371495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.371685 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.475327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.475412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.475443 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.475480 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.475506 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.579661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.579730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.579747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.579767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.579779 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.626488 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:46 crc kubenswrapper[4833]: E1013 06:29:46.626731 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.626748 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:46 crc kubenswrapper[4833]: E1013 06:29:46.627386 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.628015 4833 scope.go:117] "RemoveContainer" containerID="48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.681746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.681809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.681831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.681857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.681880 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.784919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.784993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.785018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.785050 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.785073 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.887034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.887073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.887083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.887099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.887111 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.990312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.990379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.990403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.990432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:46 crc kubenswrapper[4833]: I1013 06:29:46.990456 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:46Z","lastTransitionTime":"2025-10-13T06:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.082450 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/2.log" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.084581 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.084986 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.092727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.092756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.092766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.092782 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.092795 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.095946 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.106845 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.116867 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.133822 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.145745 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 
06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.158370 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.167933 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.189983 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.195414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.195445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.195456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.195471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.195480 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.207644 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.217854 4833
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.233414 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.254309 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.272639 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.284888 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.295256 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.297505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.297553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.297565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.297579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.297589 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.305682 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.316263 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.324171 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ad6995-3650-4ece-92c3-28d8736ef7ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158c71a74819eac0b6778680208bfd0f402fe582343198c0af41a68e823495af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:47Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.400779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: 
I1013 06:29:47.400819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.400831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.400847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.400859 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.503205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.503232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.503241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.503256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.503266 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.605619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.605656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.605668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.605683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.605692 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.626741 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.626802 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:47 crc kubenswrapper[4833]: E1013 06:29:47.626916 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:47 crc kubenswrapper[4833]: E1013 06:29:47.627098 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.708725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.708758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.708767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.708781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.708790 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.811659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.811723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.811744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.811772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.811796 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.914993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.915060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.915085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.915113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:47 crc kubenswrapper[4833]: I1013 06:29:47.915139 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:47Z","lastTransitionTime":"2025-10-13T06:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.018166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.018223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.018245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.018273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.018294 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.090977 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/3.log" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.092138 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/2.log" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.095783 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c" exitCode=1 Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.095846 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.095910 4833 scope.go:117] "RemoveContainer" containerID="48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.096413 4833 scope.go:117] "RemoveContainer" containerID="bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.096614 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.120746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.121021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.121149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.121287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.121487 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.121945 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.137711 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.150526 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.163840 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.175190 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.182949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.182987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.182997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.183014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.183027 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.189410 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.194910 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.201857 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.201974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.202010 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.202020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.202035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.202044 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.211488 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ad6995-3650-4ece-92c3-28d8736ef7ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158c71a74819eac0b6778680208bfd0f402fe582343198c0af41a68e823495af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.214080 4833 
kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/red
hat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987
117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba
717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.217385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.217418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.217427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.217442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.217453 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.223412 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.228500 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.232489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.232517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.232526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.232563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.232575 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.236965 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.243327 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2
a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.246672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.246738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.246748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.246764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.246775 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.248831 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.261929 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.262047 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.263845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.263878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.263886 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.263898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.263907 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.269044 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cf8db24c370c2c184a078d0a333e0f1c95101881215fe52eadb79fa6667151\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:18Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543409 6494 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:18.543681 6494 obj_retry.go:551] Creating *factory.egressNode crc took: 4.104695ms\\\\nI1013 06:29:18.543710 6494 factory.go:1336] Added *v1.Node event handler 7\\\\nI1013 06:29:18.543736 6494 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1013 06:29:18.543969 6494 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1013 06:29:18.544045 6494 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1013 06:29:18.544076 6494 ovnkube.go:599] Stopped ovnkube\\\\nI1013 06:29:18.544098 6494 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 06:29:18.544158 6494 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:47Z\\\",\\\"message\\\":\\\" 06:29:47.613379 6857 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-wd7ss\\\\nI1013 06:29:47.613344 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 06:29:47.613230 6857 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-5xwt6\\\\nI1013 06:29:47.613424 6857 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:47.613500 6857 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-5xwt6 in node crc\\\\nF1013 06:29:47.613548 6857 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.283569 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.294847 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.311704 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.323975 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.341628 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a8798164
95a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.353724 4833 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:48Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.366232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.366390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.366501 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.366620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.366706 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.468708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.469147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.469315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.469470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.469625 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.572487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.572757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.572915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.573035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.573141 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.626494 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.626512 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.626750 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:48 crc kubenswrapper[4833]: E1013 06:29:48.626876 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.675403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.675455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.675465 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.675479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.675491 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.778347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.778408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.778428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.778454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.778468 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.880837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.880920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.880941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.880968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.880984 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.983509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.983565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.983575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.983588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:48 crc kubenswrapper[4833]: I1013 06:29:48.983597 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:48Z","lastTransitionTime":"2025-10-13T06:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.086353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.086387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.086395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.086407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.086416 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.101850 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/3.log" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.105185 4833 scope.go:117] "RemoveContainer" containerID="bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c" Oct 13 06:29:49 crc kubenswrapper[4833]: E1013 06:29:49.105358 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.134602 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.157467 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.178925 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a8798164
95a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.188370 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.188408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.188420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.188437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.188448 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.190586 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.203474 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.216249 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.227013 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.240494 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.251006 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.261478 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.274276 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.285202 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ad6995-3650-4ece-92c3-28d8736ef7ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158c71a74819eac0b6778680208bfd0f402fe582343198c0af41a68e823495af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d
06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.290335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.290404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.290426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.290455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.290476 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.300482 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.312924 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.327357 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.344505 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:47Z\\\",\\\"message\\\":\\\" 06:29:47.613379 6857 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-wd7ss\\\\nI1013 06:29:47.613344 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 06:29:47.613230 6857 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-5xwt6\\\\nI1013 06:29:47.613424 6857 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:47.613500 6857 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-5xwt6 in node crc\\\\nF1013 06:29:47.613548 6857 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.357472 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.369519 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:49Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.392339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.392380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.392391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.392408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.392419 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.495616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.495659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.495668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.495683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.495693 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.598285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.598327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.598335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.598365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.598374 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.626127 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.626153 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:49 crc kubenswrapper[4833]: E1013 06:29:49.626367 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:49 crc kubenswrapper[4833]: E1013 06:29:49.626533 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.700809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.700856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.700868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.700885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.700898 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.803614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.803686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.803704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.803730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.803750 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.906664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.906778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.906801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.906831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:49 crc kubenswrapper[4833]: I1013 06:29:49.906856 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:49Z","lastTransitionTime":"2025-10-13T06:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.010165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.010270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.010289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.010312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.010329 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.112407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.112452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.112468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.112485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.112498 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.214596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.214631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.214638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.214651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.214659 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.317143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.317179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.317187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.317203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.317212 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.420474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.420600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.420629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.420661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.420683 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.523341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.523418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.523439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.523493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.523514 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.626528 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.626830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.626891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.627165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.627181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.627198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: E1013 06:29:50.627183 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.627210 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: E1013 06:29:50.627305 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.643517 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.656105 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.666475 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.677462 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.688760 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.703206 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.716415 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.728657 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ad6995-3650-4ece-92c3-28d8736ef7ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158c71a74819eac0b6778680208bfd0f402fe582343198c0af41a68e823495af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d
06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.729240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.729298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.729311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.729326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.729336 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.742581 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.758550 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.776032 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.806337 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:47Z\\\",\\\"message\\\":\\\" 06:29:47.613379 6857 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-wd7ss\\\\nI1013 06:29:47.613344 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 06:29:47.613230 6857 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-5xwt6\\\\nI1013 06:29:47.613424 6857 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:47.613500 6857 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-5xwt6 in node crc\\\\nF1013 06:29:47.613548 6857 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.819063 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.829802 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.831405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.831453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.831469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.831492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.831508 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.843488 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.854866 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.869242 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.881692 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:50Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.934177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.934275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.934292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.934316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:50 crc kubenswrapper[4833]: I1013 06:29:50.934333 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:50Z","lastTransitionTime":"2025-10-13T06:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.037389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.037467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.037489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.037516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.037628 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:51Z","lastTransitionTime":"2025-10-13T06:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.141860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.141933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.141957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.141986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.142009 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:51Z","lastTransitionTime":"2025-10-13T06:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... identical "NodeHasSufficientMemory" / "NodeHasNoDiskPressure" / "NodeHasSufficientPID" / "NodeNotReady" heartbeat blocks and "Node became not ready" conditions, as at 06:29:50.934 above, repeat roughly every 100 ms from 06:29:51.037 through 06:29:55.567; omitted below ...]
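Every repetition carries the same root cause string: the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/. A quick look at that directory distinguishes "empty, network operator has not rendered a config yet" from "config present but unparseable". A minimal sketch, assuming Python on the node (plain *.json configs, if any, are not matched by this simple glob):

```python
# List what the kubelet would find under the CNI config directory named in
# the error; an empty or missing directory reproduces NetworkPluginNotReady.
import json
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # path taken verbatim from the log

files = sorted(CNI_DIR.glob("*.conf*")) if CNI_DIR.is_dir() else []
if not files:
    print(f"no CNI configuration under {CNI_DIR} - network plugin not ready")
for f in files:
    try:
        conf = json.loads(f.read_text())
        print(f"{f.name}: cniVersion={conf.get('cniVersion')} name={conf.get('name')}")
    except json.JSONDecodeError as exc:
        print(f"{f.name}: present but unparseable ({exc})")
```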
Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.626984 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:29:51 crc kubenswrapper[4833]: I1013 06:29:51.627010 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:29:51 crc kubenswrapper[4833]: E1013 06:29:51.627145 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:29:51 crc kubenswrapper[4833]: E1013 06:29:51.627352 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:29:52 crc kubenswrapper[4833]: I1013 06:29:52.626987 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 06:29:52 crc kubenswrapper[4833]: I1013 06:29:52.627024 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:29:52 crc kubenswrapper[4833]: E1013 06:29:52.627178 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 06:29:52 crc kubenswrapper[4833]: E1013 06:29:52.627276 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... the "No sandbox for pod can be found" / "Error syncing pod" pairs above recur about once per second for the same four pods; later repeats omitted ...]
Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.375558 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.375747 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.375856 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.375829359 +0000 UTC m=+148.476252305 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.476375 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.476510 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.476607 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.476524734 +0000 UTC m=+148.576947690 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.476657 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.476735 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.476749 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.476723191 +0000 UTC m=+148.577146137 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.476823 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.477009 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.477034 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.477052 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.477069 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.477118 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.477100843 +0000 UTC m=+148.577523799 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
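Separately from the CNI problem, the UnmountVolume failure above shows the kubevirt.io.hostpath-provisioner CSI driver has not (re)registered with this kubelet since the restart. A node's registered drivers are recorded in its CSINode object; a minimal sketch using the Python `kubernetes` client (assumes kubeconfig access to the API):

```python
# List the CSI drivers registered on node "crc" and check for the one the
# UnmountVolume error says is missing.
from kubernetes import client, config

DRIVER = "kubevirt.io.hostpath-provisioner"  # driver name from the error above

config.load_kube_config()
csinode = client.StorageV1Api().read_csi_node("crc")
names = [d.name for d in (csinode.spec.drivers or [])]
print("registered CSI drivers:", names)
print(f"{DRIVER} registered:", DRIVER in names)
```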
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.477136 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.477163 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.477256 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.477231437 +0000 UTC m=+148.577654393 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
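The repeated `object ... not registered` errors come from the kubelet's local secret/configmap cache, which has not synced these objects yet; on their own they do not prove the objects are missing from the API. A quick API-side check distinguishes the two cases, again a sketch with the Python `kubernetes` client:

```python
# Confirm whether the secret and configmap the kubelet cannot resolve
# actually exist; names and namespace are taken from the errors above.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

NAMESPACE = "openshift-network-console"

config.load_kube_config()
v1 = client.CoreV1Api()
checks = [
    ("secret", "networking-console-plugin-cert", v1.read_namespaced_secret),
    ("configmap", "networking-console-plugin", v1.read_namespaced_config_map),
]
for kind, name, read in checks:
    try:
        read(name, NAMESPACE)
        print(f"{kind}/{name}: exists in the API")
    except ApiException as exc:
        print(f"{kind}/{name}: API returned {exc.status}")
```

If both objects exist, the errors point back at the kubelet's degraded API connectivity (the expired webhook certificate and not-ready network above) rather than at missing resources.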
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.627031 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:54 crc kubenswrapper[4833]: E1013 06:29:54.627128 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.641659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.641692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.641699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.641710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.641719 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:54Z","lastTransitionTime":"2025-10-13T06:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.744621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.744664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.744677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.744694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.744707 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:54Z","lastTransitionTime":"2025-10-13T06:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.848229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.848292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.848313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.848341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.848365 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:54Z","lastTransitionTime":"2025-10-13T06:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.950259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.950295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.950306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.950321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:54 crc kubenswrapper[4833]: I1013 06:29:54.950333 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:54Z","lastTransitionTime":"2025-10-13T06:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.052829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.052875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.052887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.052905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.052917 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.155228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.155275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.155292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.155316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.155333 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.258828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.258869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.258880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.258895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.258906 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.362127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.362187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.362205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.362230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.362248 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.464309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.464396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.464416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.464439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.464456 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.566988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.567037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.567050 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.567068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.567080 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.626288 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.626288 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:29:55 crc kubenswrapper[4833]: E1013 06:29:55.626514 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:29:55 crc kubenswrapper[4833]: E1013 06:29:55.626672 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.669321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.669453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.669482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.669511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.669530 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.772486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.772588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.772612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.772635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.772652 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.875256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.875308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.875330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.875358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.875379 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.979010 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.979067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.979083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.979103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:55 crc kubenswrapper[4833]: I1013 06:29:55.979120 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:55Z","lastTransitionTime":"2025-10-13T06:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.081878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.081959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.081985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.082017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.082036 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.184588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.184650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.184667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.184689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.184707 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.288071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.288126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.288143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.288166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.288183 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.391646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.391710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.391738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.391763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.391781 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.495471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.495569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.495589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.495645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.495663 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.599084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.599140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.599157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.599181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.599197 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.626297 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.626349 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 06:29:56 crc kubenswrapper[4833]: E1013 06:29:56.626479 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 06:29:56 crc kubenswrapper[4833]: E1013 06:29:56.626655 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.702205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.702261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.702278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.702300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.702317 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.805075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.805131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.805148 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.805171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.805189 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.908171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.908244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.908265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.908298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:56 crc kubenswrapper[4833]: I1013 06:29:56.908320 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:56Z","lastTransitionTime":"2025-10-13T06:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.010981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.011057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.011082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.011110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.011133 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.114904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.114974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.114992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.115020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.115043 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.218236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.218338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.218358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.218381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.218397 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.321493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.321590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.321608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.322009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.322065 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.425291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.425339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.425354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.425375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.425389 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.528000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.528059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.528075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.528098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.528115 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.626966 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.627075 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:29:57 crc kubenswrapper[4833]: E1013 06:29:57.627140 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:29:57 crc kubenswrapper[4833]: E1013 06:29:57.627306 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.630126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.630197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.630210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.630223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.630235 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.732948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.733001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.733018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.733038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.733054 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.835664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.835705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.835717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.835730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.835740 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.938076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.938108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.938117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.938131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:57 crc kubenswrapper[4833]: I1013 06:29:57.938159 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:57Z","lastTransitionTime":"2025-10-13T06:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.040141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.040407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.040420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.040437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.040451 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.143389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.143444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.143458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.143476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.143489 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.245780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.245825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.245840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.245856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.245869 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.349168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.349230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.349249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.349276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.349299 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.452525 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.452610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.452627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.452650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.452667 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.486510 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.486615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.486634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.486659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.486677 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:58 crc kubenswrapper[4833]: E1013 06:29:58.508604 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:58Z is after 2025-08-24T17:21:41Z"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.513760 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.513862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.513925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.513972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.513997 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:29:58 crc kubenswrapper[4833]: E1013 06:29:58.539525 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:29:58Z is after 2025-08-24T17:21:41Z" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.544690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.544761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.544779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.544803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.544821 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:58 crc kubenswrapper[4833]: E1013 06:29:58.612874 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.615087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.615157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.615177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.615204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.615223 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.626704 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.626836 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:29:58 crc kubenswrapper[4833]: E1013 06:29:58.626898 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:29:58 crc kubenswrapper[4833]: E1013 06:29:58.626892 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.717794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.717852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.717869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.717891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.717909 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.820623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.820664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.820673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.820685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.820695 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.922802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.922893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.922918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.922947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:58 crc kubenswrapper[4833]: I1013 06:29:58.922975 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:58Z","lastTransitionTime":"2025-10-13T06:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.026268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.026342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.026364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.026397 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.026421 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.129420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.129498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.129515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.129563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.129584 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.232120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.232161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.232171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.232186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.232198 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.334600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.334658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.334676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.334699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.334717 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.438083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.438151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.438169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.438193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.438210 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.540092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.540130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.540139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.540151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.540159 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.627082 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:29:59 crc kubenswrapper[4833]: E1013 06:29:59.627280 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.627345 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:29:59 crc kubenswrapper[4833]: E1013 06:29:59.627802 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.643148 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.643211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.643229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.643252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.643270 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.746350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.746450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.746529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.746600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.746623 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.849673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.849746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.849780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.849817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.849843 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.953115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.953209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.953237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.953269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:29:59 crc kubenswrapper[4833]: I1013 06:29:59.953293 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:29:59Z","lastTransitionTime":"2025-10-13T06:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.056487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.056598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.056623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.056653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.056680 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.159166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.159232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.159244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.159259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.159270 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.262439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.262493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.262509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.262532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.262622 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.366293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.366358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.366391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.366422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.366446 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.469460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.469509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.469519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.469551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.469581 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.571310 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.571380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.571402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.571426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.571459 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.626377 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.626392 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:00 crc kubenswrapper[4833]: E1013 06:30:00.626658 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:00 crc kubenswrapper[4833]: E1013 06:30:00.626737 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.643870 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.663517 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.674864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.674920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.674946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.674969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.674986 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.680039 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.703496 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.735222 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.752075 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.766225 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.777769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.777825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.777841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.777868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.777886 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.785396 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.804943 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.819868 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.834072 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.846062 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ad6995-3650-4ece-92c3-28d8736ef7ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158c71a74819eac0b6778680208bfd0f402fe582343198c0af41a68e823495af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.863922 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.880593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.880629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.880641 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.880656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.880667 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.895108 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a95
8ae034ee47875ae4a41b139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:47Z\\\",\\\"message\\\":\\\" 06:29:47.613379 6857 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-wd7ss\\\\nI1013 06:29:47.613344 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 06:29:47.613230 6857 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-5xwt6\\\\nI1013 06:29:47.613424 6857 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:47.613500 6857 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-5xwt6 in node crc\\\\nF1013 06:29:47.613548 6857 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.915038 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.931100 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.949339 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.967174 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:00Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.983639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.983715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.983740 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.983769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:00 crc kubenswrapper[4833]: I1013 06:30:00.983794 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:00Z","lastTransitionTime":"2025-10-13T06:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.086925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.086988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.087004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.087026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.087043 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.190956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.191000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.191109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.191129 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.191142 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.293453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.293512 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.293526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.293568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.293584 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.396453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.396492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.396500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.396514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.396523 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.499415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.499476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.499500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.499531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.499611 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.602337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.602374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.602384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.602399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.602410 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.626786 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.626947 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:30:01 crc kubenswrapper[4833]: E1013 06:30:01.627206 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:30:01 crc kubenswrapper[4833]: E1013 06:30:01.627397 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.705267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.705325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.705345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.705369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.705387 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.808143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.808176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.808185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.808205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.808218 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.911505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.911567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.911578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.911594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:01 crc kubenswrapper[4833]: I1013 06:30:01.911605 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:01Z","lastTransitionTime":"2025-10-13T06:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.013488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.013522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.013553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.013568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.013579 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.116405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.116459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.116475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.116500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.116517 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.219272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.219321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.219334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.219353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.219364 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.322016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.322081 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.322098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.322122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.322138 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.424723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.424807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.424833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.424865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.424888 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.528593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.528658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.528680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.528708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.528731 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.626238 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:30:02 crc kubenswrapper[4833]: E1013 06:30:02.626485 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.626238 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 06:30:02 crc kubenswrapper[4833]: E1013 06:30:02.626887 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.631755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.631821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.631846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.631871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.631891 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.735993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.736049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.736066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.736090 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.736109 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.839124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.839163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.839171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.839185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.839198 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.942095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.942140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.942161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.942179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:02 crc kubenswrapper[4833]: I1013 06:30:02.942194 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:02Z","lastTransitionTime":"2025-10-13T06:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.045172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.045295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.045378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.045409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.045467 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.147718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.147772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.147790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.147810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.147821 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.250927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.250976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.250985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.250998 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.251007 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.354230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.354301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.354317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.354342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.354362 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.457568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.457620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.457635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.457657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.457672 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.559912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.559946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.559961 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.559975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.559987 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.626460 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.626604 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:30:03 crc kubenswrapper[4833]: E1013 06:30:03.626774 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:30:03 crc kubenswrapper[4833]: E1013 06:30:03.626974 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.662695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.662756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.662777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.662798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.662812 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.766128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.766195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.766218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.766251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.766278 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.869162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.869226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.869244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.869269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.869287 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.972841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.972920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.972943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.972974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:03 crc kubenswrapper[4833]: I1013 06:30:03.972993 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:03Z","lastTransitionTime":"2025-10-13T06:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.075683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.075747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.075771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.075801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.075825 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.179251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.179350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.179371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.179435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.179675 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.285332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.285833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.286070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.286277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.286471 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.390097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.390174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.390192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.390217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.390236 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.493818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.493894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.493912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.493937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.493956 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.597048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.597121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.597138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.597160 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.597177 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.627024 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 06:30:04 crc kubenswrapper[4833]: E1013 06:30:04.627208 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.627020 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:30:04 crc kubenswrapper[4833]: E1013 06:30:04.627947 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.628364 4833 scope.go:117] "RemoveContainer" containerID="bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c"
Oct 13 06:30:04 crc kubenswrapper[4833]: E1013 06:30:04.628721 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.700224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.700327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.700348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.700374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.700391 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.803680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.803747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.803765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.803787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.803805 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.908372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.908444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.908468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.908496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:04 crc kubenswrapper[4833]: I1013 06:30:04.908517 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:04Z","lastTransitionTime":"2025-10-13T06:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.011398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.011477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.011497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.011521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.011574 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.114272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.114362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.114384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.114419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.114458 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.217723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.217791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.217810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.217837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.217858 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.321204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.321259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.321274 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.321298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.321315 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.423707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.423824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.423842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.423867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.423883 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.526657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.526721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.526738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.526761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.526779 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.626901 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:30:05 crc kubenswrapper[4833]: E1013 06:30:05.627030 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.626916 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:30:05 crc kubenswrapper[4833]: E1013 06:30:05.627244 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.628401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.628426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.628434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.628446 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.628454 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.731863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.731922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.731946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.731973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.731993 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.835209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.835304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.835321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.835345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.835365 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.938324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.938389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.938398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.938414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:05 crc kubenswrapper[4833]: I1013 06:30:05.938426 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:05Z","lastTransitionTime":"2025-10-13T06:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.041345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.041382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.041394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.041409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.041421 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.144166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.144225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.144239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.144254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.144264 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.246360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.246395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.246403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.246417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.246427 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.348891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.348953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.348970 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.348995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.349015 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.452351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.452425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.452444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.452473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.452493 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.555642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.555706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.555723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.555745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.555760 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.626950 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.627191 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:30:06 crc kubenswrapper[4833]: E1013 06:30:06.627386 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:06 crc kubenswrapper[4833]: E1013 06:30:06.627531 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.658881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.658960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.658983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.659017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.659040 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.762485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.762599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.762632 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.762668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.762691 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.865990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.866065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.866083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.866106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.866124 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.968155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.968210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.968227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.968250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:06 crc kubenswrapper[4833]: I1013 06:30:06.968267 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:06Z","lastTransitionTime":"2025-10-13T06:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.070522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.070587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.070599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.070619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.070634 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.173558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.173603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.173614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.173630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.173642 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.275742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.275769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.275777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.275790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.275799 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.378141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.378177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.378188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.378203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.378215 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.480606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.480653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.480662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.480675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.480685 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.583032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.583082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.583093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.583109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.583122 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.626591 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.626684 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:07 crc kubenswrapper[4833]: E1013 06:30:07.626742 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:07 crc kubenswrapper[4833]: E1013 06:30:07.626902 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.686691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.686763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.686772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.686786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.686796 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.789410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.789447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.789455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.789469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.789481 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.892340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.892418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.892444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.892508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.892532 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.994927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.994974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.994994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.995016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:07 crc kubenswrapper[4833]: I1013 06:30:07.995033 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:07Z","lastTransitionTime":"2025-10-13T06:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.097352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.097389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.097398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.097411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.097420 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.200171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.200213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.200230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.200260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.200318 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.303386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.303454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.303477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.303505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.303528 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.336097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.336351 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.336447 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs podName:2fd6b1c1-777a-46be-960c-c6109d1615ad nodeName:}" failed. No retries permitted until 2025-10-13 06:31:12.336420653 +0000 UTC m=+162.436843599 (durationBeforeRetry 1m4s). 
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.405847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.405912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.405934 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.405963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.405985 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.509103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.509171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.509194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.509219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.509241 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.612894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.612956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.612978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.613006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.613027 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.626838 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.627035 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.627165 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.627032 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.717141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.717292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.717322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.717370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.717390 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.820638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.820703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.820728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.820756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.820777 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.866491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.866703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.866735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.866758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.866798 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.889267 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:08Z is after 2025-08-24T17:21:41Z"
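This is the most telling error in the window: node-status patches are rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a certificate that expired weeks before this boot. The TLS comparison being reported, redone with the two timestamps taken from the entry itself:

```python
from datetime import datetime, timezone

not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # cert NotAfter, from the log
now = datetime(2025, 10, 13, 6, 30, 8, tzinfo=timezone.utc)         # kubelet's clock, from the log

assert now > not_after  # exactly the x509 'certificate has expired' condition
print(f'expired {(now - not_after).days} days before this entry')   # 49 days
```

Until the certificate is rotated (or the clock moves back inside its validity window), every status patch will fail this same way, so the immediate retry below is expected to hit the identical error.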
event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.894605 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.894635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.894659 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.914935 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:08Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.920053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.920110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.920127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.920153 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.920170 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.940940 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:08Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.945224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.945443 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.945656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.945816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.946015 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.961419 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:08Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.965715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.965769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.965784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.965807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.965819 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.979446 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T06:30:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"34e4ed34-c49c-4b1b-8fbf-570796f37a92\\\",\\\"systemUUID\\\":\\\"2a40fffb-7b97-4765-9d1a-75d6749bf8d3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:08Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:08 crc kubenswrapper[4833]: E1013 06:30:08.979723 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.982007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.982048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.982059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.982077 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:08 crc kubenswrapper[4833]: I1013 06:30:08.982088 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:08Z","lastTransitionTime":"2025-10-13T06:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.084534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.084800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.084826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.084854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.084877 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.187719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.187793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.187810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.187832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.187850 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.290655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.290721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.290738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.290763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.290779 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.394049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.394105 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.394124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.394151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.394173 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.497191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.497275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.497294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.497319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.497337 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.599795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.599848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.599862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.599882 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.599898 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.626526 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.626572 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:09 crc kubenswrapper[4833]: E1013 06:30:09.626764 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:09 crc kubenswrapper[4833]: E1013 06:30:09.626874 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.702080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.702135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.702154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.702178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.702197 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.805356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.805421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.805439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.805473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.805503 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.908609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.908675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.908693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.908724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:09 crc kubenswrapper[4833]: I1013 06:30:09.908743 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:09Z","lastTransitionTime":"2025-10-13T06:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.011826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.011904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.011929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.011953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.011971 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.115194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.115236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.115249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.115266 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.115276 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.217889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.217981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.218000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.218022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.218038 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.321289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.321350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.321367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.321389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.321405 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.424478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.424586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.424614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.424642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.424663 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.528235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.528296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.528337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.528366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.528389 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.626801 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.626836 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:10 crc kubenswrapper[4833]: E1013 06:30:10.627044 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:10 crc kubenswrapper[4833]: E1013 06:30:10.627206 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.632897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.632943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.632954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.632969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.632980 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.643684 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zbg2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1bd0f7-c161-456d-af32-2da416006789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:37Z\\\",\\\"message\\\":\\\"2025-10-13T06:28:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927\\\\n2025-10-13T06:28:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ab78e864-a9c0-47bc-8f5b-67cdce776927 to /host/opt/cni/bin/\\\\n2025-10-13T06:28:52Z [verbose] multus-daemon started\\\\n2025-10-13T06:28:52Z [verbose] Readiness Indicator file check\\\\n2025-10-13T06:29:37Z [error] have you checked 
that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k588t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zbg2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.666998 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2ebc9bfa93205f2f45a32401e1c1273dd4b8e972afe9a37da37a8992ddf6fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.684297 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580796012a1118de0542824f297dc58e075a171e6212a0c4548c6b53d49059e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa4eef77697ad20e50e6cb14b8682b070e6b4e52061a66a787687854dcd9639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.713474 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T06:29:47Z\\\",\\\"message\\\":\\\" 06:29:47.613379 6857 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-wd7ss\\\\nI1013 06:29:47.613344 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 06:29:47.613230 6857 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-5xwt6\\\\nI1013 06:29:47.613424 6857 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1013 06:29:47.613500 6857 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-5xwt6 in node crc\\\\nF1013 06:29:47.613548 6857 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:29:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5vxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wnpc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.733038 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed1cfb6c-df6a-4c55-b6ed-481f665cdea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83642f5182015076a30c0e069481be77b9a299f52171c24cc2c505c3efedc95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f40fe16dccb08459a9e5a899b317acf357cdc6143235e324495af067c3ce2b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x6fvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.735370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.735421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.735439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.735463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.735480 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.751108 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-28gq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd6b1c1-777a-46be-960c-c6109d1615ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lbgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:29:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-28gq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.771493 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.787596 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xwt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed83f8a-c94f-4ec2-945c-dc4114ec0a7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c9da791e130f3426e204c24d4be364f9a1aa6120ac6541f5160d6df87166b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62lfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xwt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.802877 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5da43b-92dd-4de8-942c-5c546a33ee6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b095d1b47e37e55494e6745601476332ff59da79569c574bcbe339f84f3e97c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baf383cfafe723e01fa636896bc1312eab46d9ed0c7f06a11148819fbb608ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd05e29f73a0d53a3751e2226f99e16b7380580236a879816495a02835684f49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e740a9dccfa64c4acf31b8fdb08c79d973407276bdef510a20df2c3da457b3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d29e1db21ef35ded0113236d02840ab0ba0bf300264d8ff3bb071500c00170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://201353ccbcce6cdbafdca9ce84b09cc6ddf26f7c216665fe781e9cf727b1f20d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92253824c1c7ecafd43d7ef009fafd0076dd498f85e1088c94b6479d72d4393b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8r5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9c9nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.819083 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2qtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe7840a-9a54-429e-a148-a3f369ba5fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0622e1f66a8df23a49853cae411fea63ebac384e54b016796847f631873261d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2z6hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2qtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.839040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.839082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.839095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.839113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.839128 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.840650 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b11fb710-807d-4c1d-b605-0d5571c77b4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe3261c39b2e34c9bf201d00c9b7cad5547b727fa1b55cc84f0c1c73ac571b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dd461bcc5a972ac36d40f021c16e93c01b4a1b6df77a1e09fdaf3ebceeb545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://062e39667792dcb0b971f5e9c8fd394599d003f81d19fe5696e0e88b324bfc52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a288d8bd15e54eefc3abdb190d6c4e996336adb4b36241e4339c5fbeac77242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://404b419e5d9f0264b987d8d690e487a828a6e9e251b7258f78cf03b0832386e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1013 06:28:50.050780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1013 06:28:50.051035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 06:28:50.052572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3044984498/tls.crt::/tmp/serving-cert-3044984498/tls.key\\\\\\\"\\\\nI1013 06:28:50.574093 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 06:28:50.577361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 06:28:50.577381 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 06:28:50.577400 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 06:28:50.577405 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 06:28:50.586530 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1013 06:28:50.587285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587296 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 06:28:50.587300 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 06:28:50.587303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 06:28:50.587306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 06:28:50.587309 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1013 06:28:50.587377 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1013 06:28:50.595213 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb30389276cd9a56a593e56be4b2413b2a9e26c579de42122dc6178104e9c85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe5fb9deb588d82b1a7f8f1e07d76123d17b532487ddd803279d023bb6f879c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.859379 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40d4f3fb-d05a-463a-bd20-2c548959d23d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://922f6b74cd6c6de604ed55974fe4f02c8eeb73358d3d2075259c5bac9d5e1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2403130eb4d7c4d26e8d254f37e89187c5d99e80028db79c1d820d581a6cdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c8af5f3b1f4d758bf4f8fb5795550ef0d2c3dc1e4fdd97de5d772bf978afe8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bf1b7396fbee0fcbc716c0e0c21a4ec012423b68a3735a034905254997595f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.874319 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"974c8dd5-8d30-481e-87e9-a93fc827d83b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:29:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ab024e93fccec089531cd9b30c0dddb671f50dc2545e91808b9194879518141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd17b6ca285f57d8161394548b55fdfed2681f648cfe5a7619cc3c325694e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26887843c37a94b82dc6fe25858a9a8e7d6cd5f78a4567bda07afba8e3a1a94b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd73615b96be8034f386dbac0f32cb406ddf8e0f8eee12249fa46d785cdf0cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.893787 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.910409 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bd8f8376b61ea23741577d207fc6d64ccc189f732d174c11507376464b3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.924878 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.940504 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5b6ea2-f89e-4768-8663-bd965bde64fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85729bd2fb31f11df2223cc77568d1b5aa27be5c2f9634c667890fd42ca22c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xgfh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wd7ss\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.942013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.942072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.942095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.942124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.942146 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:10Z","lastTransitionTime":"2025-10-13T06:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:10 crc kubenswrapper[4833]: I1013 06:30:10.955957 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ad6995-3650-4ece-92c3-28d8736ef7ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T06:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158c71a74819eac0b6778680208bfd0f402fe582343198c0af41a68e823495af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T06:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b661bbc138ab62cadbc130d842dc8d1dce42a0650e585deef53aae5f57189d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T06:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T06:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T06:28:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T06:30:10Z is after 2025-08-24T17:21:41Z" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.044673 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.044750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.044776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.044812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.044834 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.147885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.147957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.147976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.148001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.148021 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.250698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.250763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.250789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.250817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.250839 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.354455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.354520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.354569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.354599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.354619 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.457681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.457775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.457839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.457919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.457951 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.561373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.561425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.561443 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.561466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.561482 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.626244 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.626274 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:11 crc kubenswrapper[4833]: E1013 06:30:11.626486 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:11 crc kubenswrapper[4833]: E1013 06:30:11.626714 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.664061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.664100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.664109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.664123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.664132 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.767579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.767647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.767664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.767688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.767702 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.870531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.870600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.870612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.870633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.870645 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.973278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.973327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.973342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.973830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:11 crc kubenswrapper[4833]: I1013 06:30:11.973912 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:11Z","lastTransitionTime":"2025-10-13T06:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.077356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.077426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.077444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.077469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.077487 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.179315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.179367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.179382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.179401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.179416 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.282568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.282625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.282641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.282664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.282679 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.385791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.385845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.385862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.385885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.385905 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.489152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.489212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.489236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.489265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.489286 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.591771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.591825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.591845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.591868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.591888 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.626485 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.626563 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:12 crc kubenswrapper[4833]: E1013 06:30:12.626685 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:12 crc kubenswrapper[4833]: E1013 06:30:12.626813 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.694655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.694719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.694739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.694761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.694777 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.796491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.796558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.796572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.796588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.796601 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.899744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.899803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.899823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.899850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:12 crc kubenswrapper[4833]: I1013 06:30:12.899873 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:12Z","lastTransitionTime":"2025-10-13T06:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.002656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.002730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.002748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.002774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.002791 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.106362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.106499 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.106529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.106616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.106641 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.209728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.209772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.209783 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.209798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.209808 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.311436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.311470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.311482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.311498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.311508 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.415135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.415196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.415218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.415246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.415267 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.518722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.518785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.518802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.518826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.518844 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.621732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.621800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.621824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.621855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.621877 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.627064 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:13 crc kubenswrapper[4833]: E1013 06:30:13.627187 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.627070 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:13 crc kubenswrapper[4833]: E1013 06:30:13.627405 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.724637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.724677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.724687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.724703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.724713 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.826925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.826992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.827016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.827045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.827068 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.930169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.930215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.930226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.930244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:13 crc kubenswrapper[4833]: I1013 06:30:13.930255 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:13Z","lastTransitionTime":"2025-10-13T06:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.032347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.032380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.032388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.032400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.032408 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.135847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.135941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.135964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.135990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.136006 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.238788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.238860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.238885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.238915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.238938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.342291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.342351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.342367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.342390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.342407 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.445599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.445656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.445673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.445695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.445715 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.549039 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.549101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.549123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.549149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.549169 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.627117 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:14 crc kubenswrapper[4833]: E1013 06:30:14.627254 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.627296 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:14 crc kubenswrapper[4833]: E1013 06:30:14.627337 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.652144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.652194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.652212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.652233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.652252 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.755004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.755056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.755073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.755096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.755113 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.858306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.858345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.858356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.858370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.858380 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.961926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.961989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.962006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.962032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:14 crc kubenswrapper[4833]: I1013 06:30:14.962049 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:14Z","lastTransitionTime":"2025-10-13T06:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.065350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.065390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.065400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.065415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.065427 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.167582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.167647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.167665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.167690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.167708 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.270128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.270172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.270185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.270201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.270213 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.372003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.372069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.372088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.372112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.372129 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.475048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.475185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.475204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.475228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.475249 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.577732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.577781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.577792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.577811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.577824 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.626805 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.626887 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:15 crc kubenswrapper[4833]: E1013 06:30:15.627000 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:15 crc kubenswrapper[4833]: E1013 06:30:15.627595 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.628149 4833 scope.go:117] "RemoveContainer" containerID="bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c" Oct 13 06:30:15 crc kubenswrapper[4833]: E1013 06:30:15.628399 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wnpc6_openshift-ovn-kubernetes(cb9a788e-b626-43a8-955a-bf4a5a3cb145)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.680221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.680260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.680268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.680282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.680290 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.782751 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.782811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.782827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.782850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.782867 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.885906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.885950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.885959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.885973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.885982 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.989220 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.989280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.989296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.989321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:15 crc kubenswrapper[4833]: I1013 06:30:15.989337 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:15Z","lastTransitionTime":"2025-10-13T06:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.091717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.091752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.091762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.091780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.091797 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.194272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.194335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.194352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.194376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.194392 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.296960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.297034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.297051 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.297079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.297097 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.400170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.400284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.400305 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.400329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.400346 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.502587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.502659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.502679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.502703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.502721 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.606172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.606233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.606250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.606276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.606293 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.626342 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:16 crc kubenswrapper[4833]: E1013 06:30:16.626633 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.626664 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:16 crc kubenswrapper[4833]: E1013 06:30:16.627001 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.709249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.709299 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.709315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.709337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.709353 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.812647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.812714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.812726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.812749 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.812766 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.916019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.916085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.916097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.916124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:16 crc kubenswrapper[4833]: I1013 06:30:16.916137 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:16Z","lastTransitionTime":"2025-10-13T06:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.020082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.020143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.020164 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.020193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.020216 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.122828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.122889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.122906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.122925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.122939 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.225757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.225828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.225840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.225858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.225868 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.328963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.329049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.329062 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.329086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.329099 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.432795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.432848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.432858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.432876 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.432888 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.535905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.535955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.535971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.535993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.536010 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.626864 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.626948 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:17 crc kubenswrapper[4833]: E1013 06:30:17.627109 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:17 crc kubenswrapper[4833]: E1013 06:30:17.627251 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.639857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.639926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.639945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.639977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.640001 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.743283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.743385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.743397 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.743415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.743425 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.846214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.846272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.846280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.846297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.846307 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.949587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.949644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.949663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.949686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:17 crc kubenswrapper[4833]: I1013 06:30:17.949703 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:17Z","lastTransitionTime":"2025-10-13T06:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.052130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.052762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.052802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.052821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.052833 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.155309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.155366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.155383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.155402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.155418 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.257392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.257449 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.257474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.257503 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.257524 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.360352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.360431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.360455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.360488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.360510 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.463921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.463967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.463977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.463993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.464005 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.566651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.566724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.566747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.566781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.566804 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.626800 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.627002 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:18 crc kubenswrapper[4833]: E1013 06:30:18.627197 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:18 crc kubenswrapper[4833]: E1013 06:30:18.627371 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.670423 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.670498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.670521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.670585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.670610 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.773761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.773800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.773811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.773827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.773839 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.876868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.876909 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.876918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.876935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.876946 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.979621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.979697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.979720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.979748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:18 crc kubenswrapper[4833]: I1013 06:30:18.979771 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:18Z","lastTransitionTime":"2025-10-13T06:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.082166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.082236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.082253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.082277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.082295 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:19Z","lastTransitionTime":"2025-10-13T06:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.186148 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.186214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.186232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.186257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.186275 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:19Z","lastTransitionTime":"2025-10-13T06:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.198349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.198441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.198460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.198483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.198499 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T06:30:19Z","lastTransitionTime":"2025-10-13T06:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.264139 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"] Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.264572 4833 util.go:30] "No sandbox for pod can be found. 
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.267280 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.267331 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.267971 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.268341 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.287731 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podStartSLOduration=89.287710093 podStartE2EDuration="1m29.287710093s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.287556388 +0000 UTC m=+109.387979344" watchObservedRunningTime="2025-10-13 06:30:19.287710093 +0000 UTC m=+109.388133009"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.343851 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.343826127 podStartE2EDuration="1m29.343826127s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.320924107 +0000 UTC m=+109.421347053" watchObservedRunningTime="2025-10-13 06:30:19.343826127 +0000 UTC m=+109.444249043"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.353432 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9dc2a93f-c36b-47e0-a35f-627412eeb285-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.353485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc2a93f-c36b-47e0-a35f-627412eeb285-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.353587 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dc2a93f-c36b-47e0-a35f-627412eeb285-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.353631 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9dc2a93f-c36b-47e0-a35f-627412eeb285-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.353677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9dc2a93f-c36b-47e0-a35f-627412eeb285-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.356970 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.356948012 podStartE2EDuration="1m27.356948012s" podCreationTimestamp="2025-10-13 06:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.344106406 +0000 UTC m=+109.444529322" watchObservedRunningTime="2025-10-13 06:30:19.356948012 +0000 UTC m=+109.457370928"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.370842 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.37081688 podStartE2EDuration="1m0.37081688s" podCreationTimestamp="2025-10-13 06:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.357687545 +0000 UTC m=+109.458110481" watchObservedRunningTime="2025-10-13 06:30:19.37081688 +0000 UTC m=+109.471239806"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.412398 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=34.412370343 podStartE2EDuration="34.412370343s" podCreationTimestamp="2025-10-13 06:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.411814675 +0000 UTC m=+109.512237601" watchObservedRunningTime="2025-10-13 06:30:19.412370343 +0000 UTC m=+109.512793289"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.434352 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zbg2r" podStartSLOduration=89.434327693 podStartE2EDuration="1m29.434327693s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.433585979 +0000 UTC m=+109.534008915" watchObservedRunningTime="2025-10-13 06:30:19.434327693 +0000 UTC m=+109.534750649"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.454150 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9dc2a93f-c36b-47e0-a35f-627412eeb285-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.454201 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc2a93f-c36b-47e0-a35f-627412eeb285-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.454252 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dc2a93f-c36b-47e0-a35f-627412eeb285-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.454284 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9dc2a93f-c36b-47e0-a35f-627412eeb285-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.454317 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9dc2a93f-c36b-47e0-a35f-627412eeb285-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.454395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9dc2a93f-c36b-47e0-a35f-627412eeb285-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.454423 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9dc2a93f-c36b-47e0-a35f-627412eeb285-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.455479 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9dc2a93f-c36b-47e0-a35f-627412eeb285-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.462293 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc2a93f-c36b-47e0-a35f-627412eeb285-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.475093 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dc2a93f-c36b-47e0-a35f-627412eeb285-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6t2tp\" (UID: \"9dc2a93f-c36b-47e0-a35f-627412eeb285\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.512694 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x6fvd" podStartSLOduration=88.512674036 podStartE2EDuration="1m28.512674036s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.512299464 +0000 UTC m=+109.612722430" watchObservedRunningTime="2025-10-13 06:30:19.512674036 +0000 UTC m=+109.613096962"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.539688 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5xwt6" podStartSLOduration=89.539658128 podStartE2EDuration="1m29.539658128s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.539066359 +0000 UTC m=+109.639489345" watchObservedRunningTime="2025-10-13 06:30:19.539658128 +0000 UTC m=+109.640081074"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.582122 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9c9nw" podStartSLOduration=89.582104011 podStartE2EDuration="1m29.582104011s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.565760832 +0000 UTC m=+109.666183818" watchObservedRunningTime="2025-10-13 06:30:19.582104011 +0000 UTC m=+109.682526947"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.585201 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.626499 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.626727 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:30:19 crc kubenswrapper[4833]: E1013 06:30:19.626908 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:30:19 crc kubenswrapper[4833]: E1013 06:30:19.627183 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.642724 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-h2qtv" podStartSLOduration=88.64269797 podStartE2EDuration="1m28.64269797s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:19.58208906 +0000 UTC m=+109.682512016" watchObservedRunningTime="2025-10-13 06:30:19.64269797 +0000 UTC m=+109.743120916" Oct 13 06:30:19 crc kubenswrapper[4833]: I1013 06:30:19.642938 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 13 06:30:20 crc kubenswrapper[4833]: I1013 06:30:20.208057 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp" event={"ID":"9dc2a93f-c36b-47e0-a35f-627412eeb285","Type":"ContainerStarted","Data":"af6ae357be67e5e93d92657924c53a1c852a0dbe0e2bb7eb87ff4efcb9508c3c"} Oct 13 06:30:20 crc kubenswrapper[4833]: I1013 06:30:20.208525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp" event={"ID":"9dc2a93f-c36b-47e0-a35f-627412eeb285","Type":"ContainerStarted","Data":"dad1c5694c60a83c785a34dda30e33c4b1b4a40ca92bbbf2905778b1f6323e59"} Oct 13 06:30:20 crc kubenswrapper[4833]: I1013 06:30:20.246113 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.246090297 podStartE2EDuration="1.246090297s" podCreationTimestamp="2025-10-13 06:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:20.244526276 +0000 UTC m=+110.344949192" watchObservedRunningTime="2025-10-13 06:30:20.246090297 +0000 UTC m=+110.346513253" Oct 13 06:30:20 crc kubenswrapper[4833]: I1013 06:30:20.263730 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t2tp" podStartSLOduration=90.263702336 podStartE2EDuration="1m30.263702336s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:20.262811937 +0000 UTC m=+110.363234903" watchObservedRunningTime="2025-10-13 06:30:20.263702336 +0000 UTC m=+110.364125292" Oct 13 06:30:20 crc kubenswrapper[4833]: I1013 06:30:20.626130 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:20 crc kubenswrapper[4833]: I1013 06:30:20.626521 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:20 crc kubenswrapper[4833]: E1013 06:30:20.627287 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:20 crc kubenswrapper[4833]: E1013 06:30:20.627478 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:21 crc kubenswrapper[4833]: I1013 06:30:21.626098 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:21 crc kubenswrapper[4833]: E1013 06:30:21.626235 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:21 crc kubenswrapper[4833]: I1013 06:30:21.626736 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:21 crc kubenswrapper[4833]: E1013 06:30:21.626933 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:22 crc kubenswrapper[4833]: I1013 06:30:22.626447 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:22 crc kubenswrapper[4833]: I1013 06:30:22.626520 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:22 crc kubenswrapper[4833]: E1013 06:30:22.626613 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:22 crc kubenswrapper[4833]: E1013 06:30:22.626717 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:23 crc kubenswrapper[4833]: I1013 06:30:23.626691 4833 util.go:30] "No sandbox for pod can be found. 
Oct 13 06:30:23 crc kubenswrapper[4833]: I1013 06:30:23.626768 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:30:23 crc kubenswrapper[4833]: E1013 06:30:23.626807 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:30:23 crc kubenswrapper[4833]: E1013 06:30:23.626910 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:30:24 crc kubenswrapper[4833]: I1013 06:30:24.228742 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/1.log"
Oct 13 06:30:24 crc kubenswrapper[4833]: I1013 06:30:24.229431 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/0.log"
Oct 13 06:30:24 crc kubenswrapper[4833]: I1013 06:30:24.229600 4833 generic.go:334] "Generic (PLEG): container finished" podID="9d1bd0f7-c161-456d-af32-2da416006789" containerID="f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6" exitCode=1
Oct 13 06:30:24 crc kubenswrapper[4833]: I1013 06:30:24.229666 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zbg2r" event={"ID":"9d1bd0f7-c161-456d-af32-2da416006789","Type":"ContainerDied","Data":"f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6"}
Oct 13 06:30:24 crc kubenswrapper[4833]: I1013 06:30:24.229727 4833 scope.go:117] "RemoveContainer" containerID="b4d6af7c4eef019abda77b8362deb25bb68b4ad5d6564667c9513568fb21191b"
Oct 13 06:30:24 crc kubenswrapper[4833]: I1013 06:30:24.230394 4833 scope.go:117] "RemoveContainer" containerID="f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6"
Oct 13 06:30:24 crc kubenswrapper[4833]: E1013 06:30:24.230702 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zbg2r_openshift-multus(9d1bd0f7-c161-456d-af32-2da416006789)\"" pod="openshift-multus/multus-zbg2r" podUID="9d1bd0f7-c161-456d-af32-2da416006789"
Oct 13 06:30:24 crc kubenswrapper[4833]: I1013 06:30:24.626879 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:30:24 crc kubenswrapper[4833]: I1013 06:30:24.627048 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 06:30:24 crc kubenswrapper[4833]: E1013 06:30:24.627152 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 06:30:24 crc kubenswrapper[4833]: E1013 06:30:24.627221 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 06:30:25 crc kubenswrapper[4833]: I1013 06:30:25.234066 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/1.log"
Oct 13 06:30:25 crc kubenswrapper[4833]: I1013 06:30:25.626445 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 06:30:25 crc kubenswrapper[4833]: I1013 06:30:25.626474 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6"
Oct 13 06:30:25 crc kubenswrapper[4833]: E1013 06:30:25.626623 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 06:30:25 crc kubenswrapper[4833]: E1013 06:30:25.626790 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad"
Oct 13 06:30:26 crc kubenswrapper[4833]: I1013 06:30:26.627032 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 06:30:26 crc kubenswrapper[4833]: E1013 06:30:26.627245 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 06:30:26 crc kubenswrapper[4833]: I1013 06:30:26.627370 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:26 crc kubenswrapper[4833]: E1013 06:30:26.627507 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:27 crc kubenswrapper[4833]: I1013 06:30:27.626931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:27 crc kubenswrapper[4833]: I1013 06:30:27.626959 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:27 crc kubenswrapper[4833]: E1013 06:30:27.627048 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:27 crc kubenswrapper[4833]: E1013 06:30:27.627138 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:28 crc kubenswrapper[4833]: I1013 06:30:28.626837 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:28 crc kubenswrapper[4833]: I1013 06:30:28.626905 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:28 crc kubenswrapper[4833]: E1013 06:30:28.627000 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:28 crc kubenswrapper[4833]: E1013 06:30:28.627204 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:29 crc kubenswrapper[4833]: I1013 06:30:29.627081 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:29 crc kubenswrapper[4833]: I1013 06:30:29.627127 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:29 crc kubenswrapper[4833]: E1013 06:30:29.627574 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:29 crc kubenswrapper[4833]: E1013 06:30:29.627813 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:30 crc kubenswrapper[4833]: E1013 06:30:30.615393 4833 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 13 06:30:30 crc kubenswrapper[4833]: I1013 06:30:30.626732 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:30 crc kubenswrapper[4833]: I1013 06:30:30.626745 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:30 crc kubenswrapper[4833]: E1013 06:30:30.627586 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:30 crc kubenswrapper[4833]: E1013 06:30:30.627698 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:30 crc kubenswrapper[4833]: I1013 06:30:30.628899 4833 scope.go:117] "RemoveContainer" containerID="bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c" Oct 13 06:30:30 crc kubenswrapper[4833]: E1013 06:30:30.719532 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Oct 13 06:30:31 crc kubenswrapper[4833]: I1013 06:30:31.265619 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/3.log" Oct 13 06:30:31 crc kubenswrapper[4833]: I1013 06:30:31.268888 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerStarted","Data":"c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49"} Oct 13 06:30:31 crc kubenswrapper[4833]: I1013 06:30:31.270208 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:30:31 crc kubenswrapper[4833]: I1013 06:30:31.296468 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podStartSLOduration=101.296451378 podStartE2EDuration="1m41.296451378s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:31.295782026 +0000 UTC m=+121.396204952" watchObservedRunningTime="2025-10-13 06:30:31.296451378 +0000 UTC m=+121.396874304" Oct 13 06:30:31 crc kubenswrapper[4833]: I1013 06:30:31.509887 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-28gq6"] Oct 13 06:30:31 crc kubenswrapper[4833]: I1013 06:30:31.510113 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:31 crc kubenswrapper[4833]: E1013 06:30:31.510300 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:31 crc kubenswrapper[4833]: I1013 06:30:31.626734 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:31 crc kubenswrapper[4833]: E1013 06:30:31.626851 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:32 crc kubenswrapper[4833]: I1013 06:30:32.626920 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:32 crc kubenswrapper[4833]: I1013 06:30:32.626929 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:32 crc kubenswrapper[4833]: E1013 06:30:32.627315 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:32 crc kubenswrapper[4833]: E1013 06:30:32.627477 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:33 crc kubenswrapper[4833]: I1013 06:30:33.626179 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:33 crc kubenswrapper[4833]: I1013 06:30:33.626187 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:33 crc kubenswrapper[4833]: E1013 06:30:33.626422 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:33 crc kubenswrapper[4833]: E1013 06:30:33.626571 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:34 crc kubenswrapper[4833]: I1013 06:30:34.626905 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:34 crc kubenswrapper[4833]: E1013 06:30:34.627032 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:34 crc kubenswrapper[4833]: I1013 06:30:34.626905 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:34 crc kubenswrapper[4833]: E1013 06:30:34.627211 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:35 crc kubenswrapper[4833]: I1013 06:30:35.626483 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:35 crc kubenswrapper[4833]: I1013 06:30:35.626628 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:35 crc kubenswrapper[4833]: E1013 06:30:35.626693 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:35 crc kubenswrapper[4833]: E1013 06:30:35.626775 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:35 crc kubenswrapper[4833]: I1013 06:30:35.627327 4833 scope.go:117] "RemoveContainer" containerID="f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6" Oct 13 06:30:35 crc kubenswrapper[4833]: E1013 06:30:35.720659 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 13 06:30:36 crc kubenswrapper[4833]: I1013 06:30:36.287395 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/1.log" Oct 13 06:30:36 crc kubenswrapper[4833]: I1013 06:30:36.287452 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zbg2r" event={"ID":"9d1bd0f7-c161-456d-af32-2da416006789","Type":"ContainerStarted","Data":"8b45e4b875a145b2ae05a9c9a05af30df92a79bf06adb47cb6550ae4ac56cb08"} Oct 13 06:30:36 crc kubenswrapper[4833]: I1013 06:30:36.626864 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:36 crc kubenswrapper[4833]: I1013 06:30:36.626905 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:36 crc kubenswrapper[4833]: E1013 06:30:36.626975 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:36 crc kubenswrapper[4833]: E1013 06:30:36.627039 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:37 crc kubenswrapper[4833]: I1013 06:30:37.626064 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:37 crc kubenswrapper[4833]: I1013 06:30:37.626084 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:37 crc kubenswrapper[4833]: E1013 06:30:37.626597 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:37 crc kubenswrapper[4833]: E1013 06:30:37.626675 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:38 crc kubenswrapper[4833]: I1013 06:30:38.626992 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:38 crc kubenswrapper[4833]: I1013 06:30:38.626992 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:38 crc kubenswrapper[4833]: E1013 06:30:38.627143 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:38 crc kubenswrapper[4833]: E1013 06:30:38.627205 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:39 crc kubenswrapper[4833]: I1013 06:30:39.626406 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:39 crc kubenswrapper[4833]: I1013 06:30:39.626406 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:39 crc kubenswrapper[4833]: E1013 06:30:39.626564 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-28gq6" podUID="2fd6b1c1-777a-46be-960c-c6109d1615ad" Oct 13 06:30:39 crc kubenswrapper[4833]: E1013 06:30:39.626675 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 06:30:40 crc kubenswrapper[4833]: I1013 06:30:40.627284 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:40 crc kubenswrapper[4833]: E1013 06:30:40.628383 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 06:30:40 crc kubenswrapper[4833]: I1013 06:30:40.628597 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:40 crc kubenswrapper[4833]: E1013 06:30:40.629434 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 06:30:41 crc kubenswrapper[4833]: I1013 06:30:41.626490 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:41 crc kubenswrapper[4833]: I1013 06:30:41.626618 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:30:41 crc kubenswrapper[4833]: I1013 06:30:41.629099 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 13 06:30:41 crc kubenswrapper[4833]: I1013 06:30:41.629151 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 13 06:30:41 crc kubenswrapper[4833]: I1013 06:30:41.630063 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 13 06:30:41 crc kubenswrapper[4833]: I1013 06:30:41.630707 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 13 06:30:42 crc kubenswrapper[4833]: I1013 06:30:42.626152 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:42 crc kubenswrapper[4833]: I1013 06:30:42.626207 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:42 crc kubenswrapper[4833]: I1013 06:30:42.628485 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 13 06:30:42 crc kubenswrapper[4833]: I1013 06:30:42.629582 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 13 06:30:49 crc kubenswrapper[4833]: I1013 06:30:49.955783 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.055095 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qz7k6"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.055480 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.058380 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.058640 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.058885 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.059158 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.059194 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.059426 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.068863 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jpjcg"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.069859 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.071293 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-l4642"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.072751 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.073586 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.074139 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.074172 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.074441 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.074571 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.076582 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.077167 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.077831 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.078179 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.078683 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h8p9r"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.079277 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.080164 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.082414 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.083730 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.085921 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.086044 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.087291 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.087656 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.087800 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.087896 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.087974 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.088055 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.088203 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.088209 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.088218 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 
06:30:50.088644 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.089009 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.091185 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.091887 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.091923 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.091948 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.092001 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.092075 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.092167 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.092263 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.092333 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.094399 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.094773 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.094956 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.095072 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.095203 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.095289 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.095383 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.095568 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 13 06:30:50 crc 
kubenswrapper[4833]: I1013 06:30:50.095799 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.095960 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.096049 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.096083 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.096262 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.097355 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zk7qg"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.097788 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.098165 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.098462 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.099096 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.099284 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.099293 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.099687 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.100381 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.100672 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.102034 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.102147 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.102272 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.102786 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.102943 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.103226 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.103342 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.104575 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.104995 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.105517 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.105659 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.105846 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.105879 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.105959 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.106040 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.106187 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.106357 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.106457 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.107191 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7dz2"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.107485 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.111703 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-77swh"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.112066 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.113838 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.114075 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.114424 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.127729 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.130001 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tkv6x"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.130829 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tkv6x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.130949 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.131335 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.133119 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.134849 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.134854 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.135064 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.135246 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.135729 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.135778 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.135939 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.136464 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.136608 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.136732 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.136845 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.136484 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.136564 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.137354 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.137636 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.147753 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.148362 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.148584 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.148612 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.148773 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.152558 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.152723 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.152893 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.153185 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.153801 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.154055 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tgsfn"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.154200 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.154597 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52ft8\" (UniqueName: \"kubernetes.io/projected/57f3ce23-f777-41e7-a3ef-23873b3049e9-kube-api-access-52ft8\") pod \"cluster-samples-operator-665b6dd947-8btrk\" (UID: \"57f3ce23-f777-41e7-a3ef-23873b3049e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.154681 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/57f3ce23-f777-41e7-a3ef-23873b3049e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8btrk\" (UID: \"57f3ce23-f777-41e7-a3ef-23873b3049e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.154846 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.157203 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.158964 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.159078 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.159116 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.159653 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.160311 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.160796 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.161773 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.162431 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.163584 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dkwvf"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.163990 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dkwvf" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.164282 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7kzpp"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.165162 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.165441 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.165468 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.166306 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.167373 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.168130 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.168428 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.168731 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.171922 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.174799 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.175091 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.178365 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.178805 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qz7k6"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.178938 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.179714 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.186704 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l88r9"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.189628 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.189919 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.194525 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6g499"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.195372 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.204120 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.204341 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.205003 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.206273 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.206776 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.208744 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.209192 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.211101 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jpjcg"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.214454 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2qlcg"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.215413 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.216397 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m5t5p"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.216936 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.219015 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.220569 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.221317 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.221804 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.222445 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.225625 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.226385 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.226403 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.227351 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.227581 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.227614 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jl8l9"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.228705 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.229852 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xhg5q"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.230431 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xhg5q" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.231638 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.232690 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.234352 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.235049 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.238107 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.241814 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.248311 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.249785 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.250866 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.251962 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-77swh"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.253626 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.256512 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258077 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258205 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258247 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31fd7ef2-e28a-417f-8b5c-26d976680749-node-pullsecrets\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258263 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-etcd-serving-ca\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52ft8\" (UniqueName: \"kubernetes.io/projected/57f3ce23-f777-41e7-a3ef-23873b3049e9-kube-api-access-52ft8\") pod \"cluster-samples-operator-665b6dd947-8btrk\" (UID: \"57f3ce23-f777-41e7-a3ef-23873b3049e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258308 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe855835-c379-488f-84c2-46e500e828cd-metrics-tls\") pod \"dns-operator-744455d44c-jpjcg\" (UID: \"fe855835-c379-488f-84c2-46e500e828cd\") " pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258329 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7bn\" (UniqueName: \"kubernetes.io/projected/87c673fe-144f-4dc7-bafd-ca7c29e498e2-kube-api-access-wf7bn\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5lp8\" (UniqueName: \"kubernetes.io/projected/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-kube-api-access-j5lp8\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258372 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be22fb13-b4fa-49ac-8931-6beefd571639-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258404 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5067f7f-360d-4760-ad85-d9ad118f5d20-serving-cert\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258420 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8275122f-9ea3-4d09-a31e-75063b4502d1-machine-approver-tls\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258452 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ncc\" (UniqueName: \"kubernetes.io/projected/31fd7ef2-e28a-417f-8b5c-26d976680749-kube-api-access-72ncc\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258471 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d898cf2-fd64-4f08-bdde-90520345ebc5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258486 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258507 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/57f3ce23-f777-41e7-a3ef-23873b3049e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8btrk\" (UID: \"57f3ce23-f777-41e7-a3ef-23873b3049e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258601 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-images\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258618 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4rjn\" (UniqueName: \"kubernetes.io/projected/d78c0df2-c046-49a4-b00c-031053c497c4-kube-api-access-r4rjn\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258636 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-service-ca-bundle\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-config\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258667 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258707 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be22fb13-b4fa-49ac-8931-6beefd571639-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258742 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-client-ca\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.258975 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259014 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-policies\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259039 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259167 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-serving-cert\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259210 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pns4h\" (UniqueName: \"kubernetes.io/projected/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-kube-api-access-pns4h\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259285 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259367 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259426 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-config\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259452 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259476 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g24p\" (UniqueName: \"kubernetes.io/projected/ba083af2-d9a6-42e5-99ec-2b89278b08a2-kube-api-access-7g24p\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259500 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txs86\" (UniqueName: \"kubernetes.io/projected/c50f9a77-c750-45f8-9655-6002e578c0fd-kube-api-access-txs86\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259521 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d78c0df2-c046-49a4-b00c-031053c497c4-audit-dir\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-etcd-client\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259574 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be22fb13-b4fa-49ac-8931-6beefd571639-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259591 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-encryption-config\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-serving-cert\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259626 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-serving-cert\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259666 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-client-ca\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259688 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhp2\" (UniqueName: \"kubernetes.io/projected/05a4a2c9-1543-49ae-9f86-ba208d564f75-kube-api-access-8fhp2\") pod \"downloads-7954f5f757-tkv6x\" (UID: \"05a4a2c9-1543-49ae-9f86-ba208d564f75\") " pod="openshift-console/downloads-7954f5f757-tkv6x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259704 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-serving-cert\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259740 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c50f9a77-c750-45f8-9655-6002e578c0fd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259759 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8qc6\" (UniqueName: \"kubernetes.io/projected/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-kube-api-access-j8qc6\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-audit\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259842 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259862 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8275122f-9ea3-4d09-a31e-75063b4502d1-config\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259882 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c50f9a77-c750-45f8-9655-6002e578c0fd-trusted-ca\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259903 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-encryption-config\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d898cf2-fd64-4f08-bdde-90520345ebc5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259966 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-config\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.259985 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8275122f-9ea3-4d09-a31e-75063b4502d1-auth-proxy-config\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260010 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c673fe-144f-4dc7-bafd-ca7c29e498e2-trusted-ca\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260070 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8hzc\" (UniqueName: \"kubernetes.io/projected/8275122f-9ea3-4d09-a31e-75063b4502d1-kube-api-access-v8hzc\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-etcd-client\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260151 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d898cf2-fd64-4f08-bdde-90520345ebc5-config\") pod \"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260196 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-dir\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260230 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-image-import-ca\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260258 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ba083af2-d9a6-42e5-99ec-2b89278b08a2-serving-cert\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260282 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tww\" (UniqueName: \"kubernetes.io/projected/fe855835-c379-488f-84c2-46e500e828cd-kube-api-access-r4tww\") pod \"dns-operator-744455d44c-jpjcg\" (UID: \"fe855835-c379-488f-84c2-46e500e828cd\") " pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260300 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2s6\" (UniqueName: \"kubernetes.io/projected/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-kube-api-access-pm2s6\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260347 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260368 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260392 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t9s7\" (UniqueName: \"kubernetes.io/projected/f5067f7f-360d-4760-ad85-d9ad118f5d20-kube-api-access-7t9s7\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260409 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77t8j\" (UniqueName: \"kubernetes.io/projected/b4411e13-1d37-4d03-ad8a-7d24be467441-kube-api-access-77t8j\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260426 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-config\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260441 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c673fe-144f-4dc7-bafd-ca7c29e498e2-config\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260456 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-audit-policies\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260472 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31fd7ef2-e28a-417f-8b5c-26d976680749-audit-dir\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260486 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c50f9a77-c750-45f8-9655-6002e578c0fd-metrics-tls\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260504 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-config\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.260519 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c673fe-144f-4dc7-bafd-ca7c29e498e2-serving-cert\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.261490 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tkv6x"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.265631 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6g499"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.268391 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zk7qg"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.270030 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h8p9r"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.270301 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/57f3ce23-f777-41e7-a3ef-23873b3049e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8btrk\" (UID: \"57f3ce23-f777-41e7-a3ef-23873b3049e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.272669 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.273706 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.274805 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tgsfn"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.275743 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.276677 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.276817 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.277734 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7dz2"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.278788 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l88r9"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.279933 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.281705 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7kzpp"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.283047 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.284723 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c7xv6"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.286060 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.286820 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-c7xv6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.293656 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jl8l9"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.293688 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.293700 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xhg5q"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.296147 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.297212 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.298714 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.301602 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m5t5p"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.304129 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2qlcg"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.306516 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.309645 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c7xv6"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.313123 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rh4l6"] Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.313736 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rh4l6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.318030 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.337287 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.357086 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361581 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c50f9a77-c750-45f8-9655-6002e578c0fd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361614 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8qc6\" (UniqueName: \"kubernetes.io/projected/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-kube-api-access-j8qc6\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361637 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-audit\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361673 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8275122f-9ea3-4d09-a31e-75063b4502d1-config\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361694 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/411fca3a-272a-4d30-91d3-623952b953aa-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-slxv8\" (UID: \"411fca3a-272a-4d30-91d3-623952b953aa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361713 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c50f9a77-c750-45f8-9655-6002e578c0fd-trusted-ca\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: 
\"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-stats-auth\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361773 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-encryption-config\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361838 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d898cf2-fd64-4f08-bdde-90520345ebc5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361866 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-cert\") pod \"ingress-canary-xhg5q\" (UID: \"a76795df-c2cf-4bf9-a6df-34a05c0e6d59\") " pod="openshift-ingress-canary/ingress-canary-xhg5q" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361923 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-config\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361940 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c673fe-144f-4dc7-bafd-ca7c29e498e2-trusted-ca\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361962 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8275122f-9ea3-4d09-a31e-75063b4502d1-auth-proxy-config\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.361984 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-cabundle\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362024 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8hzc\" (UniqueName: \"kubernetes.io/projected/8275122f-9ea3-4d09-a31e-75063b4502d1-kube-api-access-v8hzc\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362059 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-default-certificate\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362094 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-etcd-client\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362108 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-image-import-ca\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362123 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba083af2-d9a6-42e5-99ec-2b89278b08a2-serving-cert\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362137 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d898cf2-fd64-4f08-bdde-90520345ebc5-config\") pod 
\"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362153 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-dir\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362175 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tww\" (UniqueName: \"kubernetes.io/projected/fe855835-c379-488f-84c2-46e500e828cd-kube-api-access-r4tww\") pod \"dns-operator-744455d44c-jpjcg\" (UID: \"fe855835-c379-488f-84c2-46e500e828cd\") " pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362190 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362205 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2s6\" (UniqueName: \"kubernetes.io/projected/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-kube-api-access-pm2s6\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362222 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362243 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjskt\" (UniqueName: \"kubernetes.io/projected/411fca3a-272a-4d30-91d3-623952b953aa-kube-api-access-mjskt\") pod \"package-server-manager-789f6589d5-slxv8\" (UID: \"411fca3a-272a-4d30-91d3-623952b953aa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362266 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t9s7\" (UniqueName: \"kubernetes.io/projected/f5067f7f-360d-4760-ad85-d9ad118f5d20-kube-api-access-7t9s7\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: 
\"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362304 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77t8j\" (UniqueName: \"kubernetes.io/projected/b4411e13-1d37-4d03-ad8a-7d24be467441-kube-api-access-77t8j\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362320 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-config\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c673fe-144f-4dc7-bafd-ca7c29e498e2-config\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362332 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8275122f-9ea3-4d09-a31e-75063b4502d1-config\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362351 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-audit-policies\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362397 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31fd7ef2-e28a-417f-8b5c-26d976680749-audit-dir\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362437 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c50f9a77-c750-45f8-9655-6002e578c0fd-metrics-tls\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362503 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-config\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362520 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/87c673fe-144f-4dc7-bafd-ca7c29e498e2-serving-cert\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362561 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52gj6\" (UniqueName: \"kubernetes.io/projected/d537ffb6-77d0-4bfc-bc53-54cd70938e24-kube-api-access-52gj6\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362587 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29dtq\" (UniqueName: \"kubernetes.io/projected/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-kube-api-access-29dtq\") pod \"ingress-canary-xhg5q\" (UID: \"a76795df-c2cf-4bf9-a6df-34a05c0e6d59\") " pod="openshift-ingress-canary/ingress-canary-xhg5q" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362610 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362629 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-key\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362665 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31fd7ef2-e28a-417f-8b5c-26d976680749-node-pullsecrets\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362683 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-etcd-serving-ca\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362703 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362731 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe855835-c379-488f-84c2-46e500e828cd-metrics-tls\") pod \"dns-operator-744455d44c-jpjcg\" (UID: \"fe855835-c379-488f-84c2-46e500e828cd\") " pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362751 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7bn\" (UniqueName: \"kubernetes.io/projected/87c673fe-144f-4dc7-bafd-ca7c29e498e2-kube-api-access-wf7bn\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362769 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be22fb13-b4fa-49ac-8931-6beefd571639-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362787 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362807 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5lp8\" (UniqueName: \"kubernetes.io/projected/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-kube-api-access-j5lp8\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362827 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spw5r\" (UniqueName: \"kubernetes.io/projected/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-kube-api-access-spw5r\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362865 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72ncc\" (UniqueName: \"kubernetes.io/projected/31fd7ef2-e28a-417f-8b5c-26d976680749-kube-api-access-72ncc\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362890 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d898cf2-fd64-4f08-bdde-90520345ebc5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362909 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362928 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5067f7f-360d-4760-ad85-d9ad118f5d20-serving-cert\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362945 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8275122f-9ea3-4d09-a31e-75063b4502d1-machine-approver-tls\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362964 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-service-ca-bundle\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362985 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-images\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363017 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4rjn\" (UniqueName: \"kubernetes.io/projected/d78c0df2-c046-49a4-b00c-031053c497c4-kube-api-access-r4rjn\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-service-ca-bundle\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363074 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-config\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363094 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c50f9a77-c750-45f8-9655-6002e578c0fd-trusted-ca\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363112 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be22fb13-b4fa-49ac-8931-6beefd571639-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363151 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-client-ca\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363177 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363208 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31fd7ef2-e28a-417f-8b5c-26d976680749-audit-dir\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363222 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-policies\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363242 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363263 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrr9\" (UniqueName: \"kubernetes.io/projected/f94c08d4-7426-4488-b011-c2b78fa2b705-kube-api-access-bhrr9\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363286 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-serving-cert\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363309 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363297 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-audit-policies\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363373 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363398 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pns4h\" (UniqueName: \"kubernetes.io/projected/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-kube-api-access-pns4h\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363507 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-image-import-ca\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363698 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-config\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363741 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363758 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g24p\" (UniqueName: \"kubernetes.io/projected/ba083af2-d9a6-42e5-99ec-2b89278b08a2-kube-api-access-7g24p\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363777 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txs86\" (UniqueName: \"kubernetes.io/projected/c50f9a77-c750-45f8-9655-6002e578c0fd-kube-api-access-txs86\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363798 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d78c0df2-c046-49a4-b00c-031053c497c4-audit-dir\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363864 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-etcd-client\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be22fb13-b4fa-49ac-8931-6beefd571639-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363904 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d537ffb6-77d0-4bfc-bc53-54cd70938e24-secret-volume\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363925 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-serving-cert\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.363946 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-serving-cert\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.364000 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-encryption-config\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.364054 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-client-ca\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.364078 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-metrics-certs\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.364101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fhp2\" (UniqueName: \"kubernetes.io/projected/05a4a2c9-1543-49ae-9f86-ba208d564f75-kube-api-access-8fhp2\") pod \"downloads-7954f5f757-tkv6x\" (UID: \"05a4a2c9-1543-49ae-9f86-ba208d564f75\") " pod="openshift-console/downloads-7954f5f757-tkv6x"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.364122 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.364161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-serving-cert\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.364476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.362661 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-audit\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.364607 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be22fb13-b4fa-49ac-8931-6beefd571639-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.365060 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d898cf2-fd64-4f08-bdde-90520345ebc5-config\") pod \"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.365086 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c673fe-144f-4dc7-bafd-ca7c29e498e2-config\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.365122 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.365130 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-dir\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.365125 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-client-ca\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.365832 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-config\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.365915 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.366235 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.366705 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8275122f-9ea3-4d09-a31e-75063b4502d1-auth-proxy-config\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.366847 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-policies\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.366947 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.367047 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-service-ca-bundle\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.367049 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-images\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.367569 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-config\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.367668 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c673fe-144f-4dc7-bafd-ca7c29e498e2-trusted-ca\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.367890 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-encryption-config\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.367921 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba083af2-d9a6-42e5-99ec-2b89278b08a2-serving-cert\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.368004 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.368221 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d78c0df2-c046-49a4-b00c-031053c497c4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.368285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31fd7ef2-e28a-417f-8b5c-26d976680749-node-pullsecrets\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.368358 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.368565 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.368741 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31fd7ef2-e28a-417f-8b5c-26d976680749-etcd-serving-ca\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.369064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-etcd-client\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.369069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-config\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.369341 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5067f7f-360d-4760-ad85-d9ad118f5d20-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.369603 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d78c0df2-c046-49a4-b00c-031053c497c4-audit-dir\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.371797 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-client-ca\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.371988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.372055 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.372068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe855835-c379-488f-84c2-46e500e828cd-metrics-tls\") pod \"dns-operator-744455d44c-jpjcg\" (UID: \"fe855835-c379-488f-84c2-46e500e828cd\") " pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.372279 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.372391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5067f7f-360d-4760-ad85-d9ad118f5d20-serving-cert\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.372407 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.372856 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c50f9a77-c750-45f8-9655-6002e578c0fd-metrics-tls\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373245 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-etcd-client\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373322 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be22fb13-b4fa-49ac-8931-6beefd571639-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373380 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31fd7ef2-e28a-417f-8b5c-26d976680749-serving-cert\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373421 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d898cf2-fd64-4f08-bdde-90520345ebc5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373519 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-serving-cert\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373530 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373582 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373650 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d78c0df2-c046-49a4-b00c-031053c497c4-encryption-config\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8275122f-9ea3-4d09-a31e-75063b4502d1-machine-approver-tls\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373682 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-config\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.373806 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-serving-cert\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.374006 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-serving-cert\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.374197 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.374504 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.374853 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-config\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.374937 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.376299 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c673fe-144f-4dc7-bafd-ca7c29e498e2-serving-cert\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.377708 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.397269 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.426506 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.444678 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.457612 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465450 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-cert\") pod \"ingress-canary-xhg5q\" (UID: \"a76795df-c2cf-4bf9-a6df-34a05c0e6d59\") " pod="openshift-ingress-canary/ingress-canary-xhg5q"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465548 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-cabundle\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465609 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-default-certificate\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjskt\" (UniqueName: \"kubernetes.io/projected/411fca3a-272a-4d30-91d3-623952b953aa-kube-api-access-mjskt\") pod \"package-server-manager-789f6589d5-slxv8\" (UID: \"411fca3a-272a-4d30-91d3-623952b953aa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465745 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52gj6\" (UniqueName: \"kubernetes.io/projected/d537ffb6-77d0-4bfc-bc53-54cd70938e24-kube-api-access-52gj6\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465769 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29dtq\" (UniqueName: \"kubernetes.io/projected/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-kube-api-access-29dtq\") pod \"ingress-canary-xhg5q\" (UID: \"a76795df-c2cf-4bf9-a6df-34a05c0e6d59\") " pod="openshift-ingress-canary/ingress-canary-xhg5q"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465801 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-key\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465850 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spw5r\" (UniqueName: \"kubernetes.io/projected/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-kube-api-access-spw5r\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465874 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-service-ca-bundle\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465929 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhrr9\" (UniqueName: \"kubernetes.io/projected/f94c08d4-7426-4488-b011-c2b78fa2b705-kube-api-access-bhrr9\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.465989 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d537ffb6-77d0-4bfc-bc53-54cd70938e24-secret-volume\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.466027 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-metrics-certs\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.466074 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/411fca3a-272a-4d30-91d3-623952b953aa-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-slxv8\" (UID: \"411fca3a-272a-4d30-91d3-623952b953aa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.466097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-stats-auth\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.478719 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.505278 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.517155 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.537374 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.557291 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.576616 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.592726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-metrics-certs\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.597216 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.614041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-stats-auth\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.618264 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.627133 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-service-ca-bundle\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.637078 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.657358 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.679191 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.689502 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-default-certificate\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.698608 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.717834 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.738200 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.758406 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.777955 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.797664 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.817841 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.838451 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.858807 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.879254 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Oct 13 06:30:50 crc
kubenswrapper[4833]: I1013 06:30:50.897927 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.917804 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.938238 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.958231 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.977939 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 13 06:30:50 crc kubenswrapper[4833]: I1013 06:30:50.999125 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.017979 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.039562 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.057395 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.077477 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.097409 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.118010 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.137063 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.168464 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.178034 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.197668 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.216758 4833 request.go:700] Waited for 1.011457719s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.218489 4833 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.238078 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.259197 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.277412 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.298224 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.317228 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.338221 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.349507 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d537ffb6-77d0-4bfc-bc53-54cd70938e24-secret-volume\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.358414 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.378220 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.397336 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.417489 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.437414 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.450274 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-key\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.457899 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: E1013 06:30:51.465783 4833 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Oct 13 06:30:51 crc kubenswrapper[4833]: E1013 
06:30:51.465794 4833 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Oct 13 06:30:51 crc kubenswrapper[4833]: E1013 06:30:51.465785 4833 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Oct 13 06:30:51 crc kubenswrapper[4833]: E1013 06:30:51.465844 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-cert podName:a76795df-c2cf-4bf9-a6df-34a05c0e6d59 nodeName:}" failed. No retries permitted until 2025-10-13 06:30:51.965825938 +0000 UTC m=+142.066248854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-cert") pod "ingress-canary-xhg5q" (UID: "a76795df-c2cf-4bf9-a6df-34a05c0e6d59") : failed to sync secret cache: timed out waiting for the condition Oct 13 06:30:51 crc kubenswrapper[4833]: E1013 06:30:51.466005 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-cabundle podName:f94c08d4-7426-4488-b011-c2b78fa2b705 nodeName:}" failed. No retries permitted until 2025-10-13 06:30:51.965973422 +0000 UTC m=+142.066396338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-cabundle") pod "service-ca-9c57cc56f-2qlcg" (UID: "f94c08d4-7426-4488-b011-c2b78fa2b705") : failed to sync configmap cache: timed out waiting for the condition Oct 13 06:30:51 crc kubenswrapper[4833]: E1013 06:30:51.466021 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume podName:d537ffb6-77d0-4bfc-bc53-54cd70938e24 nodeName:}" failed. No retries permitted until 2025-10-13 06:30:51.966014784 +0000 UTC m=+142.066437700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume") pod "collect-profiles-29338950-4m7p7" (UID: "d537ffb6-77d0-4bfc-bc53-54cd70938e24") : failed to sync configmap cache: timed out waiting for the condition Oct 13 06:30:51 crc kubenswrapper[4833]: E1013 06:30:51.466221 4833 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Oct 13 06:30:51 crc kubenswrapper[4833]: E1013 06:30:51.466291 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/411fca3a-272a-4d30-91d3-623952b953aa-package-server-manager-serving-cert podName:411fca3a-272a-4d30-91d3-623952b953aa nodeName:}" failed. No retries permitted until 2025-10-13 06:30:51.966269501 +0000 UTC m=+142.066692427 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/411fca3a-272a-4d30-91d3-623952b953aa-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-slxv8" (UID: "411fca3a-272a-4d30-91d3-623952b953aa") : failed to sync secret cache: timed out waiting for the condition Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.476635 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.496665 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.517217 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.537693 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.558220 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.578436 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.598220 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.617213 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.638352 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.658305 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.678117 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.699283 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.718455 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.737740 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.757875 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.777934 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.798628 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 
06:30:51.818136 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.837794 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.857882 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.879403 4833 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.898274 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.917487 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.938152 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.958881 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.978475 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.984621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/411fca3a-272a-4d30-91d3-623952b953aa-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-slxv8\" (UID: \"411fca3a-272a-4d30-91d3-623952b953aa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.984672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-cert\") pod \"ingress-canary-xhg5q\" (UID: \"a76795df-c2cf-4bf9-a6df-34a05c0e6d59\") " pod="openshift-ingress-canary/ingress-canary-xhg5q" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.984698 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-cabundle\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.984725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.985655 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume\") pod 
\"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.987429 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f94c08d4-7426-4488-b011-c2b78fa2b705-signing-cabundle\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.991009 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/411fca3a-272a-4d30-91d3-623952b953aa-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-slxv8\" (UID: \"411fca3a-272a-4d30-91d3-623952b953aa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:30:51 crc kubenswrapper[4833]: I1013 06:30:51.991126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-cert\") pod \"ingress-canary-xhg5q\" (UID: \"a76795df-c2cf-4bf9-a6df-34a05c0e6d59\") " pod="openshift-ingress-canary/ingress-canary-xhg5q" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.013842 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52ft8\" (UniqueName: \"kubernetes.io/projected/57f3ce23-f777-41e7-a3ef-23873b3049e9-kube-api-access-52ft8\") pod \"cluster-samples-operator-665b6dd947-8btrk\" (UID: \"57f3ce23-f777-41e7-a3ef-23873b3049e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.042859 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.057994 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.079611 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.098322 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.116966 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.138521 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.158406 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.199210 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c50f9a77-c750-45f8-9655-6002e578c0fd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.219386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t9s7\" (UniqueName: \"kubernetes.io/projected/f5067f7f-360d-4760-ad85-d9ad118f5d20-kube-api-access-7t9s7\") pod \"authentication-operator-69f744f599-b6fmh\" (UID: \"f5067f7f-360d-4760-ad85-d9ad118f5d20\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.231689 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8qc6\" (UniqueName: \"kubernetes.io/projected/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-kube-api-access-j8qc6\") pod \"route-controller-manager-6576b87f9c-652c5\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.232659 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.236466 4833 request.go:700] Waited for 1.873212218s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.239483 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk"] Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.247967 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.258598 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be22fb13-b4fa-49ac-8931-6beefd571639-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zh8tx\" (UID: \"be22fb13-b4fa-49ac-8931-6beefd571639\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.280214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d898cf2-fd64-4f08-bdde-90520345ebc5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h529p\" (UID: \"6d898cf2-fd64-4f08-bdde-90520345ebc5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.291278 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8hzc\" (UniqueName: \"kubernetes.io/projected/8275122f-9ea3-4d09-a31e-75063b4502d1-kube-api-access-v8hzc\") pod \"machine-approver-56656f9798-l4642\" (UID: \"8275122f-9ea3-4d09-a31e-75063b4502d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.314321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pns4h\" (UniqueName: \"kubernetes.io/projected/a46f0ce4-a965-4cc0-869a-0a1edfdb7519-kube-api-access-pns4h\") pod \"openshift-config-operator-7777fb866f-zg4v6\" (UID: \"a46f0ce4-a965-4cc0-869a-0a1edfdb7519\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.314939 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.330406 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.340129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lp8\" (UniqueName: \"kubernetes.io/projected/404a7ccb-1a6f-4185-aba4-e74c8fcd6092-kube-api-access-j5lp8\") pod \"machine-api-operator-5694c8668f-zk7qg\" (UID: \"404a7ccb-1a6f-4185-aba4-e74c8fcd6092\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.363126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77t8j\" (UniqueName: \"kubernetes.io/projected/b4411e13-1d37-4d03-ad8a-7d24be467441-kube-api-access-77t8j\") pod \"oauth-openshift-558db77b4-x7dz2\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.375338 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2s6\" (UniqueName: \"kubernetes.io/projected/8a8c5e6e-2adb-47d6-aca7-a95b42d5444e-kube-api-access-pm2s6\") pod \"openshift-apiserver-operator-796bbdcf4f-tqnzn\" (UID: \"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.392737 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.393260 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4tww\" (UniqueName: \"kubernetes.io/projected/fe855835-c379-488f-84c2-46e500e828cd-kube-api-access-r4tww\") pod \"dns-operator-744455d44c-jpjcg\" (UID: \"fe855835-c379-488f-84c2-46e500e828cd\") " pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.418839 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4rjn\" (UniqueName: \"kubernetes.io/projected/d78c0df2-c046-49a4-b00c-031053c497c4-kube-api-access-r4rjn\") pod \"apiserver-7bbb656c7d-ghwjx\" (UID: \"d78c0df2-c046-49a4-b00c-031053c497c4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.443641 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7bn\" (UniqueName: \"kubernetes.io/projected/87c673fe-144f-4dc7-bafd-ca7c29e498e2-kube-api-access-wf7bn\") pod \"console-operator-58897d9998-77swh\" (UID: \"87c673fe-144f-4dc7-bafd-ca7c29e498e2\") " pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.458195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72ncc\" (UniqueName: \"kubernetes.io/projected/31fd7ef2-e28a-417f-8b5c-26d976680749-kube-api-access-72ncc\") pod \"apiserver-76f77b778f-h8p9r\" (UID: \"31fd7ef2-e28a-417f-8b5c-26d976680749\") " pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.470227 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhp2\" (UniqueName: \"kubernetes.io/projected/05a4a2c9-1543-49ae-9f86-ba208d564f75-kube-api-access-8fhp2\") pod 
\"downloads-7954f5f757-tkv6x\" (UID: \"05a4a2c9-1543-49ae-9f86-ba208d564f75\") " pod="openshift-console/downloads-7954f5f757-tkv6x" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.498305 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b6fmh"] Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.502452 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g24p\" (UniqueName: \"kubernetes.io/projected/ba083af2-d9a6-42e5-99ec-2b89278b08a2-kube-api-access-7g24p\") pod \"controller-manager-879f6c89f-qz7k6\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.502650 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.503824 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"] Oct 13 06:30:52 crc kubenswrapper[4833]: W1013 06:30:52.510922 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5067f7f_360d_4760_ad85_d9ad118f5d20.slice/crio-bc6b833c2a126d3a78f344c359b1a509c96a1df07012406094760480513ddbf3 WatchSource:0}: Error finding container bc6b833c2a126d3a78f344c359b1a509c96a1df07012406094760480513ddbf3: Status 404 returned error can't find the container with id bc6b833c2a126d3a78f344c359b1a509c96a1df07012406094760480513ddbf3 Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.515665 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txs86\" (UniqueName: \"kubernetes.io/projected/c50f9a77-c750-45f8-9655-6002e578c0fd-kube-api-access-txs86\") pod \"ingress-operator-5b745b69d9-t6b9x\" (UID: \"c50f9a77-c750-45f8-9655-6002e578c0fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.517627 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" Oct 13 06:30:52 crc kubenswrapper[4833]: W1013 06:30:52.523091 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadfdbeae_0ada_4f22_937a_ff7fdb0d0901.slice/crio-d30d23e71b31f898ce173b34e7c5976e0a0df2f581cdf68a4955bf6442dadb21 WatchSource:0}: Error finding container d30d23e71b31f898ce173b34e7c5976e0a0df2f581cdf68a4955bf6442dadb21: Status 404 returned error can't find the container with id d30d23e71b31f898ce173b34e7c5976e0a0df2f581cdf68a4955bf6442dadb21 Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.533624 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29dtq\" (UniqueName: \"kubernetes.io/projected/a76795df-c2cf-4bf9-a6df-34a05c0e6d59-kube-api-access-29dtq\") pod \"ingress-canary-xhg5q\" (UID: \"a76795df-c2cf-4bf9-a6df-34a05c0e6d59\") " pod="openshift-ingress-canary/ingress-canary-xhg5q" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.554664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjskt\" (UniqueName: \"kubernetes.io/projected/411fca3a-272a-4d30-91d3-623952b953aa-kube-api-access-mjskt\") pod \"package-server-manager-789f6589d5-slxv8\" (UID: \"411fca3a-272a-4d30-91d3-623952b953aa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.567678 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.571581 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52gj6\" (UniqueName: \"kubernetes.io/projected/d537ffb6-77d0-4bfc-bc53-54cd70938e24-kube-api-access-52gj6\") pod \"collect-profiles-29338950-4m7p7\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.574284 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.581725 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"] Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.589256 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.601710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spw5r\" (UniqueName: \"kubernetes.io/projected/7b6ff3a0-c424-45f9-92e9-e7b5a46d7464-kube-api-access-spw5r\") pod \"router-default-5444994796-dkwvf\" (UID: \"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464\") " pod="openshift-ingress/router-default-5444994796-dkwvf" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.602089 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.602118 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.603864 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xhg5q" Oct 13 06:30:52 crc kubenswrapper[4833]: W1013 06:30:52.611401 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46f0ce4_a965_4cc0_869a_0a1edfdb7519.slice/crio-f033156d531b596e0d3acf8fe9ea10b5695573b574f915d2f558f5400497cb4e WatchSource:0}: Error finding container f033156d531b596e0d3acf8fe9ea10b5695573b574f915d2f558f5400497cb4e: Status 404 returned error can't find the container with id f033156d531b596e0d3acf8fe9ea10b5695573b574f915d2f558f5400497cb4e Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.622514 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhrr9\" (UniqueName: \"kubernetes.io/projected/f94c08d4-7426-4488-b011-c2b78fa2b705-kube-api-access-bhrr9\") pod \"service-ca-9c57cc56f-2qlcg\" (UID: \"f94c08d4-7426-4488-b011-c2b78fa2b705\") " pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.651783 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.652625 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.660503 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-77swh" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.670768 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tkv6x" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.686820 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703133 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmsvj\" (UniqueName: \"kubernetes.io/projected/fade4b8e-c06e-46ef-aaad-70a1257290aa-kube-api-access-vmsvj\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703165 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7s2v\" (UniqueName: \"kubernetes.io/projected/dc4b124f-5c7d-441f-ba49-ad167dc10163-kube-api-access-n7s2v\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703243 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-oauth-config\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703268 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5054cb2-4fb5-4389-82bd-7533b8813025-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703292 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/803e09bf-dacb-49f5-b812-1415b7bc2c37-webhook-cert\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703321 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703345 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/822cc654-3f56-4ca1-b73c-863bc7129d43-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lr2tr\" (UID: \"822cc654-3f56-4ca1-b73c-863bc7129d43\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703377 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hb9r9\" (UniqueName: \"kubernetes.io/projected/4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1-kube-api-access-hb9r9\") pod \"multus-admission-controller-857f4d67dd-l88r9\" (UID: \"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-registration-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703421 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-plugins-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fade4b8e-c06e-46ef-aaad-70a1257290aa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703482 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5054cb2-4fb5-4389-82bd-7533b8813025-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703573 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6wz\" (UniqueName: \"kubernetes.io/projected/135158fc-7ab1-4642-a36a-4ac5e06bb33e-kube-api-access-lt6wz\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703649 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc79f3f9-f6a6-43db-830d-f9300e774b68-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703690 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f0435d5-94cb-4de1-a43b-7d784b2c8022-config\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703716 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgf76\" (UniqueName: \"kubernetes.io/projected/68000101-bc2e-44dd-affe-be84000fba74-kube-api-access-cgf76\") pod \"migrator-59844c95c7-p6kj7\" (UID: \"68000101-bc2e-44dd-affe-be84000fba74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703734 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg89p\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-kube-api-access-rg89p\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703760 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76h4k\" (UniqueName: \"kubernetes.io/projected/dc79f3f9-f6a6-43db-830d-f9300e774b68-kube-api-access-76h4k\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703822 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-trusted-ca-bundle\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-certificates\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: E1013 06:30:52.703882 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.203866907 +0000 UTC m=+143.304289813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703916 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fade4b8e-c06e-46ef-aaad-70a1257290aa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703939 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-serving-cert\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703977 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-socket-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.703996 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-config\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.704012 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/135158fc-7ab1-4642-a36a-4ac5e06bb33e-srv-cert\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.704040 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/425255dc-0aa1-4b46-ae72-525d25c65135-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.704590 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-service-ca\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.704658 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jggtq\" (UniqueName: \"kubernetes.io/projected/1a1d87b9-cb40-4860-8445-4729e0945358-kube-api-access-jggtq\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.704775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l88r9\" (UID: \"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.704817 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xcm\" (UniqueName: \"kubernetes.io/projected/44a6f1e1-b3f3-4720-b282-65300d1cbf36-kube-api-access-c2xcm\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.704841 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.704864 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98lv\" (UniqueName: \"kubernetes.io/projected/b5054cb2-4fb5-4389-82bd-7533b8813025-kube-api-access-g98lv\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705023 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-mountpoint-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705051 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-proxy-tls\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705075 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/523f8b54-3667-4d44-99b3-99a4caca1cee-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 
06:30:52.705096 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f8qw\" (UniqueName: \"kubernetes.io/projected/8f0435d5-94cb-4de1-a43b-7d784b2c8022-kube-api-access-9f8qw\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705118 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-tls\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705138 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-csi-data-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705159 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-service-ca\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705193 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44a6f1e1-b3f3-4720-b282-65300d1cbf36-srv-cert\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705257 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/523f8b54-3667-4d44-99b3-99a4caca1cee-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/135158fc-7ab1-4642-a36a-4ac5e06bb33e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705303 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02fa3df-eb86-46d1-af05-0559a10899c8-serving-cert\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705338 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-bound-sa-token\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705363 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5054cb2-4fb5-4389-82bd-7533b8813025-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858zr\" (UniqueName: \"kubernetes.io/projected/a02fa3df-eb86-46d1-af05-0559a10899c8-kube-api-access-858zr\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705462 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc4b124f-5c7d-441f-ba49-ad167dc10163-images\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705643 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84mtv\" (UniqueName: \"kubernetes.io/projected/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-kube-api-access-84mtv\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705690 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc79f3f9-f6a6-43db-830d-f9300e774b68-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705713 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc4b124f-5c7d-441f-ba49-ad167dc10163-proxy-tls\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:30:52 crc 
kubenswrapper[4833]: I1013 06:30:52.705758 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-oauth-serving-cert\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705783 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-ca\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705806 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44a6f1e1-b3f3-4720-b282-65300d1cbf36-profile-collector-cert\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705846 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705880 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-trusted-ca\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705901 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-client\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.705940 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pmn\" (UniqueName: \"kubernetes.io/projected/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-kube-api-access-74pmn\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425255dc-0aa1-4b46-ae72-525d25c65135-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706035 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgjd\" (UniqueName: \"kubernetes.io/projected/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-kube-api-access-cwgjd\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706058 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd657\" (UniqueName: \"kubernetes.io/projected/822cc654-3f56-4ca1-b73c-863bc7129d43-kube-api-access-xd657\") pod \"control-plane-machine-set-operator-78cbb6b69f-lr2tr\" (UID: \"822cc654-3f56-4ca1-b73c-863bc7129d43\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706079 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc4b124f-5c7d-441f-ba49-ad167dc10163-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706176 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/803e09bf-dacb-49f5-b812-1415b7bc2c37-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706379 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425255dc-0aa1-4b46-ae72-525d25c65135-config\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706403 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdh2t\" (UniqueName: \"kubernetes.io/projected/803e09bf-dacb-49f5-b812-1415b7bc2c37-kube-api-access-gdh2t\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/803e09bf-dacb-49f5-b812-1415b7bc2c37-tmpfs\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706507 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-config\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.706610 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f0435d5-94cb-4de1-a43b-7d784b2c8022-serving-cert\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.715222 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jpjcg"] Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.729031 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dkwvf" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.780217 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.810031 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.810349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-config\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.810398 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f0435d5-94cb-4de1-a43b-7d784b2c8022-serving-cert\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.810456 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmsvj\" (UniqueName: \"kubernetes.io/projected/fade4b8e-c06e-46ef-aaad-70a1257290aa-kube-api-access-vmsvj\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.810481 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7s2v\" (UniqueName: \"kubernetes.io/projected/dc4b124f-5c7d-441f-ba49-ad167dc10163-kube-api-access-n7s2v\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.810520 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-oauth-config\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: E1013 06:30:52.815132 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.314153779 +0000 UTC m=+143.414576705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.815200 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5054cb2-4fb5-4389-82bd-7533b8813025-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.815234 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/803e09bf-dacb-49f5-b812-1415b7bc2c37-webhook-cert\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.815280 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.815872 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f0435d5-94cb-4de1-a43b-7d784b2c8022-serving-cert\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" Oct 13 06:30:52 crc kubenswrapper[4833]: E1013 06:30:52.816153 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.316139027 +0000 UTC m=+143.416561943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816396 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/822cc654-3f56-4ca1-b73c-863bc7129d43-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lr2tr\" (UID: \"822cc654-3f56-4ca1-b73c-863bc7129d43\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816425 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9r9\" (UniqueName: \"kubernetes.io/projected/4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1-kube-api-access-hb9r9\") pod \"multus-admission-controller-857f4d67dd-l88r9\" (UID: \"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816447 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-registration-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816463 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-plugins-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816484 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxq2w\" (UniqueName: \"kubernetes.io/projected/9e99b112-724a-48b6-9b92-b019d5092add-kube-api-access-dxq2w\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816504 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fade4b8e-c06e-46ef-aaad-70a1257290aa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816549 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5054cb2-4fb5-4389-82bd-7533b8813025-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: 
I1013 06:30:52.816577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c40960a-a5c5-442a-9124-fba675359f3b-config-volume\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816613 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6wz\" (UniqueName: \"kubernetes.io/projected/135158fc-7ab1-4642-a36a-4ac5e06bb33e-kube-api-access-lt6wz\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816629 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9e99b112-724a-48b6-9b92-b019d5092add-node-bootstrap-token\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816658 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc79f3f9-f6a6-43db-830d-f9300e774b68-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816676 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f0435d5-94cb-4de1-a43b-7d784b2c8022-config\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816704 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgf76\" (UniqueName: \"kubernetes.io/projected/68000101-bc2e-44dd-affe-be84000fba74-kube-api-access-cgf76\") pod \"migrator-59844c95c7-p6kj7\" (UID: \"68000101-bc2e-44dd-affe-be84000fba74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816723 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg89p\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-kube-api-access-rg89p\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816743 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76h4k\" (UniqueName: \"kubernetes.io/projected/dc79f3f9-f6a6-43db-830d-f9300e774b68-kube-api-access-76h4k\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816784 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-trusted-ca-bundle\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816802 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-certificates\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816817 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fade4b8e-c06e-46ef-aaad-70a1257290aa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816835 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-serving-cert\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816861 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-socket-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-config\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/135158fc-7ab1-4642-a36a-4ac5e06bb33e-srv-cert\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816921 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/425255dc-0aa1-4b46-ae72-525d25c65135-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816938 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-service-ca\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" 
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816960 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jggtq\" (UniqueName: \"kubernetes.io/projected/1a1d87b9-cb40-4860-8445-4729e0945358-kube-api-access-jggtq\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.816999 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l88r9\" (UID: \"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817016 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6mkv\" (UniqueName: \"kubernetes.io/projected/8c40960a-a5c5-442a-9124-fba675359f3b-kube-api-access-z6mkv\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817038 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2xcm\" (UniqueName: \"kubernetes.io/projected/44a6f1e1-b3f3-4720-b282-65300d1cbf36-kube-api-access-c2xcm\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817071 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g98lv\" (UniqueName: \"kubernetes.io/projected/b5054cb2-4fb5-4389-82bd-7533b8813025-kube-api-access-g98lv\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817114 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-mountpoint-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817134 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-proxy-tls\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817149 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9e99b112-724a-48b6-9b92-b019d5092add-certs\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817166 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/523f8b54-3667-4d44-99b3-99a4caca1cee-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817182 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f8qw\" (UniqueName: \"kubernetes.io/projected/8f0435d5-94cb-4de1-a43b-7d784b2c8022-kube-api-access-9f8qw\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-tls\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-csi-data-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-service-ca\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44a6f1e1-b3f3-4720-b282-65300d1cbf36-srv-cert\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817268 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c40960a-a5c5-442a-9124-fba675359f3b-metrics-tls\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817296 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/523f8b54-3667-4d44-99b3-99a4caca1cee-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817311 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/135158fc-7ab1-4642-a36a-4ac5e06bb33e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817348 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02fa3df-eb86-46d1-af05-0559a10899c8-serving-cert\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817367 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-bound-sa-token\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817386 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5054cb2-4fb5-4389-82bd-7533b8813025-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817402 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858zr\" (UniqueName: \"kubernetes.io/projected/a02fa3df-eb86-46d1-af05-0559a10899c8-kube-api-access-858zr\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817418 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc4b124f-5c7d-441f-ba49-ad167dc10163-images\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817436 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84mtv\" (UniqueName: \"kubernetes.io/projected/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-kube-api-access-84mtv\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc79f3f9-f6a6-43db-830d-f9300e774b68-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817476 
4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc4b124f-5c7d-441f-ba49-ad167dc10163-proxy-tls\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817492 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817509 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-oauth-serving-cert\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817524 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-ca\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817562 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44a6f1e1-b3f3-4720-b282-65300d1cbf36-profile-collector-cert\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817607 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-trusted-ca\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817622 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-client\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74pmn\" (UniqueName: \"kubernetes.io/projected/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-kube-api-access-74pmn\") pod 
\"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425255dc-0aa1-4b46-ae72-525d25c65135-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817679 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwgjd\" (UniqueName: \"kubernetes.io/projected/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-kube-api-access-cwgjd\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817694 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd657\" (UniqueName: \"kubernetes.io/projected/822cc654-3f56-4ca1-b73c-863bc7129d43-kube-api-access-xd657\") pod \"control-plane-machine-set-operator-78cbb6b69f-lr2tr\" (UID: \"822cc654-3f56-4ca1-b73c-863bc7129d43\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc4b124f-5c7d-441f-ba49-ad167dc10163-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817750 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/803e09bf-dacb-49f5-b812-1415b7bc2c37-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817765 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425255dc-0aa1-4b46-ae72-525d25c65135-config\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817782 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdh2t\" (UniqueName: \"kubernetes.io/projected/803e09bf-dacb-49f5-b812-1415b7bc2c37-kube-api-access-gdh2t\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.817810 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/803e09bf-dacb-49f5-b812-1415b7bc2c37-tmpfs\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 
crc kubenswrapper[4833]: I1013 06:30:52.818161 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/803e09bf-dacb-49f5-b812-1415b7bc2c37-tmpfs\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.818260 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fade4b8e-c06e-46ef-aaad-70a1257290aa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.818673 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-registration-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.818727 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-plugins-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.819251 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5054cb2-4fb5-4389-82bd-7533b8813025-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.819249 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-config\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.819313 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-csi-data-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.820180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-oauth-serving-cert\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.820483 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-socket-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" Oct 13 
06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.820767 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc4b124f-5c7d-441f-ba49-ad167dc10163-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.822751 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.823266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-ca\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.824262 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/803e09bf-dacb-49f5-b812-1415b7bc2c37-webhook-cert\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.824564 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425255dc-0aa1-4b46-ae72-525d25c65135-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.824592 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/803e09bf-dacb-49f5-b812-1415b7bc2c37-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.825417 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44a6f1e1-b3f3-4720-b282-65300d1cbf36-profile-collector-cert\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.826279 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc4b124f-5c7d-441f-ba49-ad167dc10163-images\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.828073 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f0435d5-94cb-4de1-a43b-7d784b2c8022-config\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.829692 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-trusted-ca\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.829789 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425255dc-0aa1-4b46-ae72-525d25c65135-config\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.830053 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8"]
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.830161 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc79f3f9-f6a6-43db-830d-f9300e774b68-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.831179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-service-ca\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.831415 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-certificates\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.831511 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-trusted-ca-bundle\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.832302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-mountpoint-dir\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.832580 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-config\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.833065 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.834982 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-oauth-config\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.835886 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-service-ca\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.836116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc79f3f9-f6a6-43db-830d-f9300e774b68-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.836149 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/523f8b54-3667-4d44-99b3-99a4caca1cee-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.836233 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx"]
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.838495 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44a6f1e1-b3f3-4720-b282-65300d1cbf36-srv-cert\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.838760 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.844355 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc4b124f-5c7d-441f-ba49-ad167dc10163-proxy-tls\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.844598 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/135158fc-7ab1-4642-a36a-4ac5e06bb33e-srv-cert\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.845469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a02fa3df-eb86-46d1-af05-0559a10899c8-etcd-client\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.846301 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/135158fc-7ab1-4642-a36a-4ac5e06bb33e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.846311 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p"]
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.846370 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5054cb2-4fb5-4389-82bd-7533b8813025-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.846780 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-tls\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.847483 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/822cc654-3f56-4ca1-b73c-863bc7129d43-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lr2tr\" (UID: \"822cc654-3f56-4ca1-b73c-863bc7129d43\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.852271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/523f8b54-3667-4d44-99b3-99a4caca1cee-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.854565 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02fa3df-eb86-46d1-af05-0559a10899c8-serving-cert\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.856735 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-serving-cert\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.857174 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmsvj\" (UniqueName: \"kubernetes.io/projected/fade4b8e-c06e-46ef-aaad-70a1257290aa-kube-api-access-vmsvj\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.861519 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l88r9\" (UID: \"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.861744 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-proxy-tls\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.866669 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.869383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fade4b8e-c06e-46ef-aaad-70a1257290aa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9c87n\" (UID: \"fade4b8e-c06e-46ef-aaad-70a1257290aa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.872411 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7s2v\" (UniqueName: \"kubernetes.io/projected/dc4b124f-5c7d-441f-ba49-ad167dc10163-kube-api-access-n7s2v\") pod \"machine-config-operator-74547568cd-rxcsk\" (UID: \"dc4b124f-5c7d-441f-ba49-ad167dc10163\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.888103 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xhg5q"]
Oct 13 06:30:52 crc kubenswrapper[4833]: W1013 06:30:52.893477 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod411fca3a_272a_4d30_91d3_623952b953aa.slice/crio-6ae39ffa354ce7e1d0a6233fb52c83487b0c6e7ed11f4d0f7c6b5388ab6423a2 WatchSource:0}: Error finding container 6ae39ffa354ce7e1d0a6233fb52c83487b0c6e7ed11f4d0f7c6b5388ab6423a2: Status 404 returned error can't find the container with id 6ae39ffa354ce7e1d0a6233fb52c83487b0c6e7ed11f4d0f7c6b5388ab6423a2
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.899443 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9r9\" (UniqueName: \"kubernetes.io/projected/4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1-kube-api-access-hb9r9\") pod \"multus-admission-controller-857f4d67dd-l88r9\" (UID: \"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.921369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:52 crc kubenswrapper[4833]: E1013 06:30:52.921726 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.421695972 +0000 UTC m=+143.522118888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.921873 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.921899 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxq2w\" (UniqueName: \"kubernetes.io/projected/9e99b112-724a-48b6-9b92-b019d5092add-kube-api-access-dxq2w\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.921916 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c40960a-a5c5-442a-9124-fba675359f3b-config-volume\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.921944 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9e99b112-724a-48b6-9b92-b019d5092add-node-bootstrap-token\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.922015 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6mkv\" (UniqueName: \"kubernetes.io/projected/8c40960a-a5c5-442a-9124-fba675359f3b-kube-api-access-z6mkv\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.922051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9e99b112-724a-48b6-9b92-b019d5092add-certs\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.922071 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c40960a-a5c5-442a-9124-fba675359f3b-metrics-tls\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6"
Oct 13 06:30:52 crc kubenswrapper[4833]: E1013 06:30:52.922443 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.422427233 +0000 UTC m=+143.522850229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.923736 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c40960a-a5c5-442a-9124-fba675359f3b-config-volume\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.932373 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c40960a-a5c5-442a-9124-fba675359f3b-metrics-tls\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.933381 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9e99b112-724a-48b6-9b92-b019d5092add-node-bootstrap-token\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.939065 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwgjd\" (UniqueName: \"kubernetes.io/projected/96a6d6d4-3b2e-410a-af74-2e05a6dc0025-kube-api-access-cwgjd\") pod \"csi-hostpathplugin-jl8l9\" (UID: \"96a6d6d4-3b2e-410a-af74-2e05a6dc0025\") " pod="hostpath-provisioner/csi-hostpathplugin-jl8l9"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.941728 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9e99b112-724a-48b6-9b92-b019d5092add-certs\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.949467 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd657\" (UniqueName: \"kubernetes.io/projected/822cc654-3f56-4ca1-b73c-863bc7129d43-kube-api-access-xd657\") pod \"control-plane-machine-set-operator-78cbb6b69f-lr2tr\" (UID: \"822cc654-3f56-4ca1-b73c-863bc7129d43\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr"
Oct 13 06:30:52 crc kubenswrapper[4833]: I1013 06:30:52.954654 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6wz\" (UniqueName: \"kubernetes.io/projected/135158fc-7ab1-4642-a36a-4ac5e06bb33e-kube-api-access-lt6wz\") pod \"olm-operator-6b444d44fb-lmq94\" (UID: \"135158fc-7ab1-4642-a36a-4ac5e06bb33e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.005336 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5054cb2-4fb5-4389-82bd-7533b8813025-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.014158 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdh2t\" (UniqueName: \"kubernetes.io/projected/803e09bf-dacb-49f5-b812-1415b7bc2c37-kube-api-access-gdh2t\") pod \"packageserver-d55dfcdfc-sgbwk\" (UID: \"803e09bf-dacb-49f5-b812-1415b7bc2c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.015501 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.022996 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.023489 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.523463546 +0000 UTC m=+143.623886462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.038195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858zr\" (UniqueName: \"kubernetes.io/projected/a02fa3df-eb86-46d1-af05-0559a10899c8-kube-api-access-858zr\") pod \"etcd-operator-b45778765-m5t5p\" (UID: \"a02fa3df-eb86-46d1-af05-0559a10899c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.052646 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.058718 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84mtv\" (UniqueName: \"kubernetes.io/projected/e912af71-9fc5-42ea-99cf-5a2e5a42cf0d-kube-api-access-84mtv\") pod \"machine-config-controller-84d6567774-rpfzn\" (UID: \"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.075210 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgf76\" (UniqueName: \"kubernetes.io/projected/68000101-bc2e-44dd-affe-be84000fba74-kube-api-access-cgf76\") pod \"migrator-59844c95c7-p6kj7\" (UID: \"68000101-bc2e-44dd-affe-be84000fba74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.093764 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg89p\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-kube-api-access-rg89p\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.110865 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.123518 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76h4k\" (UniqueName: \"kubernetes.io/projected/dc79f3f9-f6a6-43db-830d-f9300e774b68-kube-api-access-76h4k\") pod \"openshift-controller-manager-operator-756b6f6bc6-ds2pp\" (UID: \"dc79f3f9-f6a6-43db-830d-f9300e774b68\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.123860 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.124347 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.124757 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.624746666 +0000 UTC m=+143.725169582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.124971 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.126270 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.127960 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h8p9r"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.129343 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.141090 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/425255dc-0aa1-4b46-ae72-525d25c65135-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pzz9g\" (UID: \"425255dc-0aa1-4b46-ae72-525d25c65135\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.151095 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.152077 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-77swh"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.152409 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.157290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2xcm\" (UniqueName: \"kubernetes.io/projected/44a6f1e1-b3f3-4720-b282-65300d1cbf36-kube-api-access-c2xcm\") pod \"catalog-operator-68c6474976-t8q8k\" (UID: \"44a6f1e1-b3f3-4720-b282-65300d1cbf36\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.157646 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.178718 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g98lv\" (UniqueName: \"kubernetes.io/projected/b5054cb2-4fb5-4389-82bd-7533b8813025-kube-api-access-g98lv\") pod \"cluster-image-registry-operator-dc59b4c8b-d2vbr\" (UID: \"b5054cb2-4fb5-4389-82bd-7533b8813025\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr"
Oct 13 06:30:53 crc kubenswrapper[4833]: W1013 06:30:53.183389 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87c673fe_144f_4dc7_bafd_ca7c29e498e2.slice/crio-87a2f356966136adb7982cc1e45a10ab757fc17094303f1bd50dc85ef2710bc1 WatchSource:0}: Error finding container 87a2f356966136adb7982cc1e45a10ab757fc17094303f1bd50dc85ef2710bc1: Status 404 returned error can't find the container with id 87a2f356966136adb7982cc1e45a10ab757fc17094303f1bd50dc85ef2710bc1
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.197789 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jl8l9"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.210072 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jggtq\" (UniqueName: \"kubernetes.io/projected/1a1d87b9-cb40-4860-8445-4729e0945358-kube-api-access-jggtq\") pod \"marketplace-operator-79b997595-6g499\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " pod="openshift-marketplace/marketplace-operator-79b997595-6g499"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.212667 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7dz2"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.214984 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f8qw\" (UniqueName: \"kubernetes.io/projected/8f0435d5-94cb-4de1-a43b-7d784b2c8022-kube-api-access-9f8qw\") pod \"service-ca-operator-777779d784-ddtzr\" (UID: \"8f0435d5-94cb-4de1-a43b-7d784b2c8022\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.221212 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tkv6x"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.228373 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.228639 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.728611231 +0000 UTC m=+143.829034147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.228860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.229327 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.729319052 +0000 UTC m=+143.829741968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.245280 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74pmn\" (UniqueName: \"kubernetes.io/projected/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-kube-api-access-74pmn\") pod \"console-f9d7485db-tgsfn\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") " pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.246743 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zk7qg"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.264023 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-bound-sa-token\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.268318 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.299373 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.308999 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.321732 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.330208 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.330587 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxq2w\" (UniqueName: \"kubernetes.io/projected/9e99b112-724a-48b6-9b92-b019d5092add-kube-api-access-dxq2w\") pod \"machine-config-server-rh4l6\" (UID: \"9e99b112-724a-48b6-9b92-b019d5092add\") " pod="openshift-machine-config-operator/machine-config-server-rh4l6"
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.330677 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.830656913 +0000 UTC m=+143.931079889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: W1013 06:30:53.330780 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05a4a2c9_1543_49ae_9f86_ba208d564f75.slice/crio-57e6a57ad6fead3961fbe89abfe9a13b3d6fa153a9b43463632cc949e1941c8d WatchSource:0}: Error finding container 57e6a57ad6fead3961fbe89abfe9a13b3d6fa153a9b43463632cc949e1941c8d: Status 404 returned error can't find the container with id 57e6a57ad6fead3961fbe89abfe9a13b3d6fa153a9b43463632cc949e1941c8d
Oct 13 06:30:53 crc kubenswrapper[4833]: W1013 06:30:53.342650 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404a7ccb_1a6f_4185_aba4_e74c8fcd6092.slice/crio-6cb889fab14d5e221a1dba4f630ad61ab32e55eb784df78f0fe2f139451c1120 WatchSource:0}: Error finding container 6cb889fab14d5e221a1dba4f630ad61ab32e55eb784df78f0fe2f139451c1120: Status 404 returned error can't find the container with id 6cb889fab14d5e221a1dba4f630ad61ab32e55eb784df78f0fe2f139451c1120
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.346650 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.352572 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6mkv\" (UniqueName: \"kubernetes.io/projected/8c40960a-a5c5-442a-9124-fba675359f3b-kube-api-access-z6mkv\") pod \"dns-default-c7xv6\" (UID: \"8c40960a-a5c5-442a-9124-fba675359f3b\") " pod="openshift-dns/dns-default-c7xv6"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.359341 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.364864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" event={"ID":"f5067f7f-360d-4760-ad85-d9ad118f5d20","Type":"ContainerStarted","Data":"65acbfa4c34184e5dc006fbe7f85c0f1666158d3194271861b6c951c25d7946e"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.364915 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" event={"ID":"f5067f7f-360d-4760-ad85-d9ad118f5d20","Type":"ContainerStarted","Data":"bc6b833c2a126d3a78f344c359b1a509c96a1df07012406094760480513ddbf3"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.368284 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.369785 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" event={"ID":"c50f9a77-c750-45f8-9655-6002e578c0fd","Type":"ContainerStarted","Data":"7343b1ca5748b59f759e1766e718b4ad62f6191341dfbd64ec434beb8011554b"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.385804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dkwvf" event={"ID":"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464","Type":"ContainerStarted","Data":"98ab237ca026b643c45c12100f98314b241aaed7d654a0436e5dcbd40d152e52"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.385845 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dkwvf" event={"ID":"7b6ff3a0-c424-45f9-92e9-e7b5a46d7464","Type":"ContainerStarted","Data":"fb5bb76df3281c5935e238aa40782b9f389c10a281981cd53ebcb3267a39240b"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.390457 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" event={"ID":"8275122f-9ea3-4d09-a31e-75063b4502d1","Type":"ContainerStarted","Data":"1a4eee360f5fef356ff1427fcd9af399469400cd3793a421c01c61383fb9ec3d"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.390486 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" event={"ID":"8275122f-9ea3-4d09-a31e-75063b4502d1","Type":"ContainerStarted","Data":"1f4ecb34347e29e6524fbd30b84c3c78b147e12f5a16bcad541b03c3d18676fe"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.391813 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" event={"ID":"be22fb13-b4fa-49ac-8931-6beefd571639","Type":"ContainerStarted","Data":"a75aa323695f9d781d9a559c30f565bf8a614ec1a3854b9b5c980f2fe9afe59e"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.406311 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xhg5q" event={"ID":"a76795df-c2cf-4bf9-a6df-34a05c0e6d59","Type":"ContainerStarted","Data":"930d8ad116a114c77b46fb995862b4ae86985317d15422858e79e071788db025"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.409499 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" event={"ID":"fe855835-c379-488f-84c2-46e500e828cd","Type":"ContainerStarted","Data":"2fcd553b0a5002be9312ecb1976616ef3ddba5c437cf5d352d8a9d2dfe63473d"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.412265 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6g499"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.412965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" event={"ID":"adfdbeae-0ada-4f22-937a-ff7fdb0d0901","Type":"ContainerStarted","Data":"d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.413029 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" event={"ID":"adfdbeae-0ada-4f22-937a-ff7fdb0d0901","Type":"ContainerStarted","Data":"d30d23e71b31f898ce173b34e7c5976e0a0df2f581cdf68a4955bf6442dadb21"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.413082 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.418795 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-652c5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.418834 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" podUID="adfdbeae-0ada-4f22-937a-ff7fdb0d0901" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.423367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tkv6x" event={"ID":"05a4a2c9-1543-49ae-9f86-ba208d564f75","Type":"ContainerStarted","Data":"57e6a57ad6fead3961fbe89abfe9a13b3d6fa153a9b43463632cc949e1941c8d"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.439244 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.439675 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.439986 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:53.939973357 +0000 UTC m=+144.040396273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.440899 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" event={"ID":"b4411e13-1d37-4d03-ad8a-7d24be467441","Type":"ContainerStarted","Data":"c342d73f28c3cbce983fa52dd67aef537c61f9de595fac7dbab41d00a72c265b"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.443578 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" event={"ID":"6d898cf2-fd64-4f08-bdde-90520345ebc5","Type":"ContainerStarted","Data":"1bdccbb6c4b90ab4dd82109dca36dba7d8182f12a70cd0283157bbb0b9741370"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.444950 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" event={"ID":"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e","Type":"ContainerStarted","Data":"968fa16d2a14e52ca12a44353106ba6ad78f69afa23b010bf267d22b131590c9"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.446075 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" event={"ID":"d537ffb6-77d0-4bfc-bc53-54cd70938e24","Type":"ContainerStarted","Data":"5deefd5b787e411d65e6d43be969201491923306d62dffe5b032a364a9853aea"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.454465 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l88r9"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.454501 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.456968 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" event={"ID":"57f3ce23-f777-41e7-a3ef-23873b3049e9","Type":"ContainerStarted","Data":"83bdba191aafdabff235c2d6ae872cfdc17eb314d0684ef133f6cb7ddd015aa6"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.457038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" event={"ID":"57f3ce23-f777-41e7-a3ef-23873b3049e9","Type":"ContainerStarted","Data":"b1fe998a1bb89558f7136302a2580a607e14a6e5751af1969c82b878d7440b15"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.461116 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.468773 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-77swh" event={"ID":"87c673fe-144f-4dc7-bafd-ca7c29e498e2","Type":"ContainerStarted","Data":"87a2f356966136adb7982cc1e45a10ab757fc17094303f1bd50dc85ef2710bc1"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.498006 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.514623 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" event={"ID":"d78c0df2-c046-49a4-b00c-031053c497c4","Type":"ContainerStarted","Data":"00db9b8d8502c2d586aee3790620a91a1ce30f24a07f6ee43568debb02e71d3d"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.514912 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c7xv6"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.524045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" event={"ID":"411fca3a-272a-4d30-91d3-623952b953aa","Type":"ContainerStarted","Data":"6ae39ffa354ce7e1d0a6233fb52c83487b0c6e7ed11f4d0f7c6b5388ab6423a2"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.526678 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" event={"ID":"31fd7ef2-e28a-417f-8b5c-26d976680749","Type":"ContainerStarted","Data":"7b61d5921bc7c5060e0c5c8570dd5817a56685eb1c9a882e584e6ef7a18bd2b6"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.531032 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rh4l6"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.534833 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2qlcg"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.539768 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" event={"ID":"a46f0ce4-a965-4cc0-869a-0a1edfdb7519","Type":"ContainerStarted","Data":"9789ba3c957d61bee6af931f91480d212b4defbc60ce36826c55a6bed3d73b83"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.539811 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" event={"ID":"a46f0ce4-a965-4cc0-869a-0a1edfdb7519","Type":"ContainerStarted","Data":"f033156d531b596e0d3acf8fe9ea10b5695573b574f915d2f558f5400497cb4e"}
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.540688 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.544145 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.04412194 +0000 UTC m=+144.144544926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.611614 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qz7k6"]
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.643681 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.643961 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.143948748 +0000 UTC m=+144.244371664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.729683 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.731153 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.735163 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.745436 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.745881 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.245863766 +0000 UTC m=+144.346286682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: W1013 06:30:53.778794 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf94c08d4_7426_4488_b011_c2b78fa2b705.slice/crio-fdfc9c525259c4179f251c4342e7bebc69400392807a97db8fa0b38d511cd796 WatchSource:0}: Error finding container fdfc9c525259c4179f251c4342e7bebc69400392807a97db8fa0b38d511cd796: Status 404 returned error can't find the container with id fdfc9c525259c4179f251c4342e7bebc69400392807a97db8fa0b38d511cd796
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.846657 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.847078 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.347062684 +0000 UTC m=+144.447485610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:53 crc kubenswrapper[4833]: I1013 06:30:53.947865 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:53 crc kubenswrapper[4833]: E1013 06:30:53.948598 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.448579421 +0000 UTC m=+144.549002337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.034303 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m5t5p"]
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.051842 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.052237 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.552222419 +0000 UTC m=+144.652645335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.057231 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n"]
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.064599 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk"]
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.075707 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g"]
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.130240 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr"]
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.150557 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"]
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.153989 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jl8l9"]
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.154760 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.154873 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.654851688 +0000 UTC m=+144.755274604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.154949 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.155316 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.655306542 +0000 UTC m=+144.755729538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.270659 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.271390 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.771366472 +0000 UTC m=+144.871789388 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.316353 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6g499"] Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.374580 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.374885 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:54.874873677 +0000 UTC m=+144.975296593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.395797 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tgsfn"] Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.461724 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6fmh" podStartSLOduration=123.461707635 podStartE2EDuration="2m3.461707635s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:54.455504964 +0000 UTC m=+144.555927880" watchObservedRunningTime="2025-10-13 06:30:54.461707635 +0000 UTC m=+144.562130551" Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.473924 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7"] Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.483328 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.484002 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-13 06:30:54.983965473 +0000 UTC m=+145.084388389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.517021 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dkwvf" podStartSLOduration=123.517002305 podStartE2EDuration="2m3.517002305s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:54.499084764 +0000 UTC m=+144.599507680" watchObservedRunningTime="2025-10-13 06:30:54.517002305 +0000 UTC m=+144.617425221" Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.519429 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c7xv6"] Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.545573 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn"] Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.569943 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" event={"ID":"803e09bf-dacb-49f5-b812-1415b7bc2c37","Type":"ContainerStarted","Data":"54aa72df23b9aa935a418dd7b77a0b36aa6c8b28d4fafe41b6ab433e32312824"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.570190 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k"] Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.571972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" event={"ID":"a02fa3df-eb86-46d1-af05-0559a10899c8","Type":"ContainerStarted","Data":"c8f4a5dcaf606b44528bc685c513a447ac55452b650e6d520a783bf05cfd8a16"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.577008 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" event={"ID":"dc4b124f-5c7d-441f-ba49-ad167dc10163","Type":"ContainerStarted","Data":"d4fdde62d26ef12c9cce00789cdfc53b5e5bb4120f94e0e4a47e699a8a857e9b"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.584943 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.585299 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.085284234 +0000 UTC m=+145.185707150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.589103 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-77swh" event={"ID":"87c673fe-144f-4dc7-bafd-ca7c29e498e2","Type":"ContainerStarted","Data":"34b19f19bfc7587de39ae7dda5aee7143cb2981599745402fcd5ebcdb95bbd58"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.607393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" event={"ID":"6d898cf2-fd64-4f08-bdde-90520345ebc5","Type":"ContainerStarted","Data":"acd168cc631652f8222ee6c3c1c4f3860aa695e014d1a80ff40cc63ac30cacd0"} Oct 13 06:30:54 crc kubenswrapper[4833]: W1013 06:30:54.611452 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode912af71_9fc5_42ea_99cf_5a2e5a42cf0d.slice/crio-1fb0fb58de118ab9345cd4f76440a643c51ead0d78a1de49ad44bc413f588b28 WatchSource:0}: Error finding container 1fb0fb58de118ab9345cd4f76440a643c51ead0d78a1de49ad44bc413f588b28: Status 404 returned error can't find the container with id 1fb0fb58de118ab9345cd4f76440a643c51ead0d78a1de49ad44bc413f588b28 Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.618973 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" event={"ID":"57f3ce23-f777-41e7-a3ef-23873b3049e9","Type":"ContainerStarted","Data":"5d831511c9f3b7dd68abe9a10cb4e0727ac1ab3d0ac32686f9e1cf8ffaf6d9a3"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.622338 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" event={"ID":"411fca3a-272a-4d30-91d3-623952b953aa","Type":"ContainerStarted","Data":"74a7dc37b98369fc26d04d437cf2ba90bd65b08694c0812ddbe468aef2cd64c3"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.647177 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" event={"ID":"404a7ccb-1a6f-4185-aba4-e74c8fcd6092","Type":"ContainerStarted","Data":"3ecdb2e37e51eb1756fb34624cef1bfe5a5e56c2b7272bce531d8eacc9d150ae"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.647322 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr"] Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.647419 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" event={"ID":"404a7ccb-1a6f-4185-aba4-e74c8fcd6092","Type":"ContainerStarted","Data":"6cb889fab14d5e221a1dba4f630ad61ab32e55eb784df78f0fe2f139451c1120"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.653398 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" 
event={"ID":"425255dc-0aa1-4b46-ae72-525d25c65135","Type":"ContainerStarted","Data":"5c61ba53c098f9a7ee6c5c59ac3eec7dde75e490349ac683fead9fb652f30ebd"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.654482 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr" event={"ID":"822cc654-3f56-4ca1-b73c-863bc7129d43","Type":"ContainerStarted","Data":"b03c1d32c239efa93a9cae63026f12052164e4a5d6a5e6cd4d55206ec720098a"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.658048 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" event={"ID":"ba083af2-d9a6-42e5-99ec-2b89278b08a2","Type":"ContainerStarted","Data":"61c094351334356f940136cc582ca7d863ab70e590ac14a38a57a99f367aff05"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.658150 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" event={"ID":"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1","Type":"ContainerStarted","Data":"d4197a28d92aeb7b23e6145b9d25c8344368ba7dd38d4951c731bf701704fd05"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.658242 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tgsfn" event={"ID":"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2","Type":"ContainerStarted","Data":"6a7bff9589055443af10abb86c9299460b71801a49f0fb40e967dc46c79ce011"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.658330 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" event={"ID":"fe855835-c379-488f-84c2-46e500e828cd","Type":"ContainerStarted","Data":"a7ac3f5c55adaf5f71f4825d82719e0793b7882e0f154b88a20422d02927749b"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.658434 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" event={"ID":"fade4b8e-c06e-46ef-aaad-70a1257290aa","Type":"ContainerStarted","Data":"0c9115eca6270a16b71ad3771241652ed7d55fc4b79e5ae9157a6c7ccd08c502"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.658519 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" event={"ID":"f94c08d4-7426-4488-b011-c2b78fa2b705","Type":"ContainerStarted","Data":"fdfc9c525259c4179f251c4342e7bebc69400392807a97db8fa0b38d511cd796"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.658668 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" event={"ID":"d537ffb6-77d0-4bfc-bc53-54cd70938e24","Type":"ContainerStarted","Data":"2f084ac0c9164b4220b602a3e14b51bd0d26a02c2b5adee8704ad4b6f0d06662"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.658874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xhg5q" event={"ID":"a76795df-c2cf-4bf9-a6df-34a05c0e6d59","Type":"ContainerStarted","Data":"0c4001fa7a9f689577d06e2861377bff237866b2732d08b705c241837561a860"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.661441 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rh4l6" event={"ID":"9e99b112-724a-48b6-9b92-b019d5092add","Type":"ContainerStarted","Data":"39488ab7e0230da006edc4ca62354d07ca67c2c3dc31ef4aa512e702dcd2a3ad"} Oct 13 06:30:54 
crc kubenswrapper[4833]: I1013 06:30:54.664407 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tkv6x" event={"ID":"05a4a2c9-1543-49ae-9f86-ba208d564f75","Type":"ContainerStarted","Data":"ee9419f5b0a3dd3e5fd971ce6c064a5dc0b7d25bf0363a7565402c287f01da59"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.665409 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tkv6x" Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.667031 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.667140 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.667914 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" event={"ID":"c50f9a77-c750-45f8-9655-6002e578c0fd","Type":"ContainerStarted","Data":"f459b256a105d3f4d9a8342cfe3526dcebea9b828daaaeb08bba36cf3b0d3b2a"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.670631 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" event={"ID":"be22fb13-b4fa-49ac-8931-6beefd571639","Type":"ContainerStarted","Data":"1cb0187e216adcc771881cc1c4e7658452aea819874abe36359fe927b22c6cb0"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.680434 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" event={"ID":"8a8c5e6e-2adb-47d6-aca7-a95b42d5444e","Type":"ContainerStarted","Data":"c91625171f7f4a721f54e29d6a76c5af6978568640fba55159a698a26beb50a8"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.683382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" event={"ID":"b5054cb2-4fb5-4389-82bd-7533b8813025","Type":"ContainerStarted","Data":"d8f966f73aa77bbba2297477bd6d19c3152db8a6392239aee871f5a0ddcab2ae"} Oct 13 06:30:54 crc kubenswrapper[4833]: W1013 06:30:54.685449 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f0435d5_94cb_4de1_a43b_7d784b2c8022.slice/crio-3b14a8b37872e0293894e8925e26ae70aaedf01280897f756247831be1e7bce5 WatchSource:0}: Error finding container 3b14a8b37872e0293894e8925e26ae70aaedf01280897f756247831be1e7bce5: Status 404 returned error can't find the container with id 3b14a8b37872e0293894e8925e26ae70aaedf01280897f756247831be1e7bce5 Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.685611 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:54 crc 
kubenswrapper[4833]: I1013 06:30:54.685739 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" event={"ID":"96a6d6d4-3b2e-410a-af74-2e05a6dc0025","Type":"ContainerStarted","Data":"fe71b0a4f974909e4f65b6c8ee83e9e537088051124401399d68ab519a83cd99"} Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.685886 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.185873154 +0000 UTC m=+145.286296070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.688204 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" event={"ID":"135158fc-7ab1-4642-a36a-4ac5e06bb33e","Type":"ContainerStarted","Data":"f2fc55e810953f04e4f0214af0a0175d1696bbff3faff06b26dd8742694e2bfd"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.691315 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" event={"ID":"1a1d87b9-cb40-4860-8445-4729e0945358","Type":"ContainerStarted","Data":"a99f2d24a81c7502faac060b2ea9f14c270eae647e842e771f976de45ec662b9"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.694244 4833 generic.go:334] "Generic (PLEG): container finished" podID="a46f0ce4-a965-4cc0-869a-0a1edfdb7519" containerID="9789ba3c957d61bee6af931f91480d212b4defbc60ce36826c55a6bed3d73b83" exitCode=0 Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.694551 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" event={"ID":"a46f0ce4-a965-4cc0-869a-0a1edfdb7519","Type":"ContainerDied","Data":"9789ba3c957d61bee6af931f91480d212b4defbc60ce36826c55a6bed3d73b83"} Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.694598 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp"] Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.695127 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-652c5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.695161 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" podUID="adfdbeae-0ada-4f22-937a-ff7fdb0d0901" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.696519 4833 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" podStartSLOduration=123.696505674 podStartE2EDuration="2m3.696505674s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:54.689465228 +0000 UTC m=+144.789888144" watchObservedRunningTime="2025-10-13 06:30:54.696505674 +0000 UTC m=+144.796928590" Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.733874 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.734177 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 13 06:30:54 crc kubenswrapper[4833]: W1013 06:30:54.743452 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc79f3f9_f6a6_43db_830d_f9300e774b68.slice/crio-08b6b569d549b826b533da48114eb333089abb51af4b6776e46f34a49a5dfa2a WatchSource:0}: Error finding container 08b6b569d549b826b533da48114eb333089abb51af4b6776e46f34a49a5dfa2a: Status 404 returned error can't find the container with id 08b6b569d549b826b533da48114eb333089abb51af4b6776e46f34a49a5dfa2a Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.786853 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.788338 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.288317658 +0000 UTC m=+145.388740654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.888162 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.888311 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.388279939 +0000 UTC m=+145.488702855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.888744 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.889092 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.389075332 +0000 UTC m=+145.489498248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.989782 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:54 crc kubenswrapper[4833]: E1013 06:30:54.990175 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.490155956 +0000 UTC m=+145.590578872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:54 crc kubenswrapper[4833]: I1013 06:30:54.996825 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zh8tx" podStartSLOduration=123.99680831 podStartE2EDuration="2m3.99680831s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:54.985193362 +0000 UTC m=+145.085616278" watchObservedRunningTime="2025-10-13 06:30:54.99680831 +0000 UTC m=+145.097231226" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.040156 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tkv6x" podStartSLOduration=125.040137962 podStartE2EDuration="2m5.040137962s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.037923488 +0000 UTC m=+145.138346404" watchObservedRunningTime="2025-10-13 06:30:55.040137962 +0000 UTC m=+145.140560878" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.087055 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btrk" podStartSLOduration=125.087039598 podStartE2EDuration="2m5.087039598s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.083374491 +0000 UTC m=+145.183797427" watchObservedRunningTime="2025-10-13 06:30:55.087039598 +0000 UTC m=+145.187462514" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 
06:30:55.091150 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.092942 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.59292199 +0000 UTC m=+145.693344906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.115388 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" podStartSLOduration=55.115371743 podStartE2EDuration="55.115371743s" podCreationTimestamp="2025-10-13 06:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.108702659 +0000 UTC m=+145.209125575" watchObservedRunningTime="2025-10-13 06:30:55.115371743 +0000 UTC m=+145.215794659" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.148965 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h529p" podStartSLOduration=124.148928221 podStartE2EDuration="2m4.148928221s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.147640073 +0000 UTC m=+145.248063009" watchObservedRunningTime="2025-10-13 06:30:55.148928221 +0000 UTC m=+145.249351137" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.188725 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xhg5q" podStartSLOduration=5.188707059 podStartE2EDuration="5.188707059s" podCreationTimestamp="2025-10-13 06:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.18840079 +0000 UTC m=+145.288823706" watchObservedRunningTime="2025-10-13 06:30:55.188707059 +0000 UTC m=+145.289129975" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.192252 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.192679 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.692664455 +0000 UTC m=+145.793087361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.294995 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.295328 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.795308664 +0000 UTC m=+145.895731580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.397408 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.397993 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.897953664 +0000 UTC m=+145.998376590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.498996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.499472 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:55.99945444 +0000 UTC m=+146.099877426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.601003 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.601202 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.101172533 +0000 UTC m=+146.201595459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.601705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.602155 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.102138081 +0000 UTC m=+146.202560997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.702435 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.702865 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.202835704 +0000 UTC m=+146.303258620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.707983 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" event={"ID":"425255dc-0aa1-4b46-ae72-525d25c65135","Type":"ContainerStarted","Data":"5ad8315b76c41f3d365999ac6542f1b3b64daed5fb6f62b1c32354a3305a372d"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.713349 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" event={"ID":"135158fc-7ab1-4642-a36a-4ac5e06bb33e","Type":"ContainerStarted","Data":"22955e1136c782c18db36ce90d3780fcc9347ec47dcdd9a505aafa6f735e81ed"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.716029 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" event={"ID":"411fca3a-272a-4d30-91d3-623952b953aa","Type":"ContainerStarted","Data":"62ae65ee1281c3f734b42d592368fc3c6cd86ca68819a245290aec0f6ef5a88f"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.716145 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.723185 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" event={"ID":"dc4b124f-5c7d-441f-ba49-ad167dc10163","Type":"ContainerStarted","Data":"06271c94c5e84e1735bb858c5328071fb531ea3dddbb6313ea8a4e3f446862b7"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.732076 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" event={"ID":"8f0435d5-94cb-4de1-a43b-7d784b2c8022","Type":"ContainerStarted","Data":"3b14a8b37872e0293894e8925e26ae70aaedf01280897f756247831be1e7bce5"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.733578 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" event={"ID":"b4411e13-1d37-4d03-ad8a-7d24be467441","Type":"ContainerStarted","Data":"1caf582fa4c7398683f68d8f1143d2641787f87c549ca020cc0f42a9b28b3b89"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.733892 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.736984 4833 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x7dz2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body= Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.737037 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" podUID="b4411e13-1d37-4d03-ad8a-7d24be467441" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.737364 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" event={"ID":"8275122f-9ea3-4d09-a31e-75063b4502d1","Type":"ContainerStarted","Data":"8ab79dc20218f43ca49389ff72ff4c63bcf2255c9f5104da9a203a43488eb975"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.740715 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tgsfn" event={"ID":"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2","Type":"ContainerStarted","Data":"f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.742941 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" event={"ID":"b5054cb2-4fb5-4389-82bd-7533b8813025","Type":"ContainerStarted","Data":"c9c24468f386f5f0506f764bc8bb5094280824b25750777f888ef8a323706481"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.744363 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" event={"ID":"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1","Type":"ContainerStarted","Data":"f0f96b5ceb15a8744eba0a5c89b645f23739ef443a728ee0fe1a7003c67d6980"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.748316 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" event={"ID":"1a1d87b9-cb40-4860-8445-4729e0945358","Type":"ContainerStarted","Data":"1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.748786 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.750039 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6g499 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.750092 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" podUID="1a1d87b9-cb40-4860-8445-4729e0945358" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.750200 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:30:55 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:30:55 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:30:55 crc kubenswrapper[4833]: healthz check failed Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.750244 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.755143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" event={"ID":"404a7ccb-1a6f-4185-aba4-e74c8fcd6092","Type":"ContainerStarted","Data":"574bd1461e093d0a9a10795cab0315d16cfdd345babb35c96a75b9d9bfc83cec"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.756485 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr" event={"ID":"822cc654-3f56-4ca1-b73c-863bc7129d43","Type":"ContainerStarted","Data":"baff1122bb9e4da8245e92e147a169bbecbe8f152a2e127617dd94ccbda89575"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.758293 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" event={"ID":"f94c08d4-7426-4488-b011-c2b78fa2b705","Type":"ContainerStarted","Data":"19ab127619f441936548ef4e3bb65c30e6d631f4b9e4a432dbcafe989e0d1b0b"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.769365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" event={"ID":"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d","Type":"ContainerStarted","Data":"f20943a56ecdc59999d95883ec2cc6d820247916bc8e08644e4d3f1ffcc0e83a"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.769399 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" event={"ID":"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d","Type":"ContainerStarted","Data":"1fb0fb58de118ab9345cd4f76440a643c51ead0d78a1de49ad44bc413f588b28"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.783886 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-l4642" podStartSLOduration=125.783867254 podStartE2EDuration="2m5.783867254s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.783477103 +0000 UTC m=+145.883900019" watchObservedRunningTime="2025-10-13 06:30:55.783867254 +0000 UTC m=+145.884290170" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.784298 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" podStartSLOduration=124.784292506 podStartE2EDuration="2m4.784292506s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.76450602 +0000 UTC m=+145.864928936" watchObservedRunningTime="2025-10-13 06:30:55.784292506 +0000 UTC m=+145.884715422" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.790573 4833 generic.go:334] "Generic (PLEG): container finished" podID="31fd7ef2-e28a-417f-8b5c-26d976680749" containerID="55bba77c9a65e1443db7c508d78377a430f4a2992300a60614eb28eff48b9d6f" exitCode=0 Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.790666 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" event={"ID":"31fd7ef2-e28a-417f-8b5c-26d976680749","Type":"ContainerDied","Data":"55bba77c9a65e1443db7c508d78377a430f4a2992300a60614eb28eff48b9d6f"} Oct 
13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.802166 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c7xv6" event={"ID":"8c40960a-a5c5-442a-9124-fba675359f3b","Type":"ContainerStarted","Data":"b730e4c5809fb768e444cb6def20b378f8050437f417125a2e852d1e20d8b87e"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.802214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c7xv6" event={"ID":"8c40960a-a5c5-442a-9124-fba675359f3b","Type":"ContainerStarted","Data":"35bad1dbad41bed9d0ec35791df15f63d7af5083cd41e8225909d2ed49dfb08d"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.804358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7" event={"ID":"68000101-bc2e-44dd-affe-be84000fba74","Type":"ContainerStarted","Data":"37ef18f85def07952c16862942a8a71388d7e8c5790daac2edf543530a2d94aa"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.804402 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7" event={"ID":"68000101-bc2e-44dd-affe-be84000fba74","Type":"ContainerStarted","Data":"c4a9f87e84e6720ff7125cab039787b2a9a22594370eb1aa147e742716082041"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.805197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.807642 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.307628916 +0000 UTC m=+146.408051832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.816974 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" podStartSLOduration=124.816958228 podStartE2EDuration="2m4.816958228s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.815264168 +0000 UTC m=+145.915687084" watchObservedRunningTime="2025-10-13 06:30:55.816958228 +0000 UTC m=+145.917381144" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.818155 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" event={"ID":"803e09bf-dacb-49f5-b812-1415b7bc2c37","Type":"ContainerStarted","Data":"0996a06a937837a76c73d2844dca75590b3e595f04d4810fbdaaa3598e9d8f5d"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.818869 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.820013 4833 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sgbwk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.820076 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" podUID="803e09bf-dacb-49f5-b812-1415b7bc2c37" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.822112 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" event={"ID":"c50f9a77-c750-45f8-9655-6002e578c0fd","Type":"ContainerStarted","Data":"79c3c082c4341e713fdc5ecfe9b8882e967e2922f975debc37f40bc2a23ef860"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.826984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" event={"ID":"ba083af2-d9a6-42e5-99ec-2b89278b08a2","Type":"ContainerStarted","Data":"5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.828093 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.829138 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qz7k6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: 
connection refused" start-of-body= Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.829178 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" podUID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.834270 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lr2tr" podStartSLOduration=124.834252341 podStartE2EDuration="2m4.834252341s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.833131529 +0000 UTC m=+145.933554445" watchObservedRunningTime="2025-10-13 06:30:55.834252341 +0000 UTC m=+145.934675257" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.852016 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" event={"ID":"44a6f1e1-b3f3-4720-b282-65300d1cbf36","Type":"ContainerStarted","Data":"cabd50d500c12acff976eeca7cdd56774e5eaf87d7ee78f46840053dec3ca3a6"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.852070 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" event={"ID":"44a6f1e1-b3f3-4720-b282-65300d1cbf36","Type":"ContainerStarted","Data":"364190d860544f3324065d4b9c1ff8d12bd6ac0ea8c4bc1edcebaebf1e6660d7"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.854750 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.854880 4833 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-t8q8k container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.854917 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" podUID="44a6f1e1-b3f3-4720-b282-65300d1cbf36" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.855691 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2qlcg" podStartSLOduration=124.855681346 podStartE2EDuration="2m4.855681346s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.855659815 +0000 UTC m=+145.956082731" watchObservedRunningTime="2025-10-13 06:30:55.855681346 +0000 UTC m=+145.956104262" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.898683 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rh4l6" 
event={"ID":"9e99b112-724a-48b6-9b92-b019d5092add","Type":"ContainerStarted","Data":"38f1fe42d86f7e50af1ef7d0d943183e2b7edef9c9de84467b1064f7dd4f0421"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.905588 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" podStartSLOduration=124.905516837 podStartE2EDuration="2m4.905516837s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.887813551 +0000 UTC m=+145.988236467" watchObservedRunningTime="2025-10-13 06:30:55.905516837 +0000 UTC m=+146.005939753" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.905872 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:55 crc kubenswrapper[4833]: E1013 06:30:55.906224 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.406208277 +0000 UTC m=+146.506631193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.909214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" event={"ID":"dc79f3f9-f6a6-43db-830d-f9300e774b68","Type":"ContainerStarted","Data":"1bdae21d250059937085d281332cc56c6e874f79ad28ec1f2df0bd6b9156765f"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.909260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" event={"ID":"dc79f3f9-f6a6-43db-830d-f9300e774b68","Type":"ContainerStarted","Data":"08b6b569d549b826b533da48114eb333089abb51af4b6776e46f34a49a5dfa2a"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.922852 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" podStartSLOduration=124.922835461 podStartE2EDuration="2m4.922835461s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.920872664 +0000 UTC m=+146.021295570" watchObservedRunningTime="2025-10-13 06:30:55.922835461 +0000 UTC m=+146.023258377" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.927452 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" 
event={"ID":"fe855835-c379-488f-84c2-46e500e828cd","Type":"ContainerStarted","Data":"d3e0299e0c09ce1479b9a21244281c6f49cf3c898b3775ed47fdee97efb3c158"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.942951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" event={"ID":"fade4b8e-c06e-46ef-aaad-70a1257290aa","Type":"ContainerStarted","Data":"f9d6f324bcc4ebfdc084eebeb36f70f01811fc669923b35d3789dfb6e20ec78f"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.944268 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" podStartSLOduration=124.941523466 podStartE2EDuration="2m4.941523466s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.938625371 +0000 UTC m=+146.039048287" watchObservedRunningTime="2025-10-13 06:30:55.941523466 +0000 UTC m=+146.041946382" Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.953115 4833 generic.go:334] "Generic (PLEG): container finished" podID="d78c0df2-c046-49a4-b00c-031053c497c4" containerID="479fe36e53f738450778c53342be972381c6befe674cc99bf66471cead2203da" exitCode=0 Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.953210 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" event={"ID":"d78c0df2-c046-49a4-b00c-031053c497c4","Type":"ContainerDied","Data":"479fe36e53f738450778c53342be972381c6befe674cc99bf66471cead2203da"} Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.953840 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:30:55 crc kubenswrapper[4833]: I1013 06:30:55.953876 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.008160 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t6b9x" podStartSLOduration=125.008142046 podStartE2EDuration="2m5.008142046s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:56.006981182 +0000 UTC m=+146.107404098" watchObservedRunningTime="2025-10-13 06:30:56.008142046 +0000 UTC m=+146.108564962" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.008990 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" podStartSLOduration=125.008982791 podStartE2EDuration="2m5.008982791s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:55.978298207 +0000 UTC m=+146.078721123" watchObservedRunningTime="2025-10-13 
06:30:56.008982791 +0000 UTC m=+146.109405707" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.033860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.034086 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.534075851 +0000 UTC m=+146.634498767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.049256 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" podStartSLOduration=126.049241303 podStartE2EDuration="2m6.049241303s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:56.029347344 +0000 UTC m=+146.129770260" watchObservedRunningTime="2025-10-13 06:30:56.049241303 +0000 UTC m=+146.149664219" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.079771 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tqnzn" podStartSLOduration=125.079750632 podStartE2EDuration="2m5.079750632s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:56.078268289 +0000 UTC m=+146.178691215" watchObservedRunningTime="2025-10-13 06:30:56.079750632 +0000 UTC m=+146.180173548" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.100766 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-77swh" podStartSLOduration=126.100749383 podStartE2EDuration="2m6.100749383s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:56.097364735 +0000 UTC m=+146.197787651" watchObservedRunningTime="2025-10-13 06:30:56.100749383 +0000 UTC m=+146.201172299" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.119874 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rh4l6" podStartSLOduration=6.11985585 podStartE2EDuration="6.11985585s" podCreationTimestamp="2025-10-13 06:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-13 06:30:56.119047576 +0000 UTC m=+146.219470492" watchObservedRunningTime="2025-10-13 06:30:56.11985585 +0000 UTC m=+146.220278766" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.144025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.144852 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.644834587 +0000 UTC m=+146.745257503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.148744 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9c87n" podStartSLOduration=125.148727181 podStartE2EDuration="2m5.148727181s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:56.144411655 +0000 UTC m=+146.244834571" watchObservedRunningTime="2025-10-13 06:30:56.148727181 +0000 UTC m=+146.249150097" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.186285 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ds2pp" podStartSLOduration=126.186264214 podStartE2EDuration="2m6.186264214s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:56.168915689 +0000 UTC m=+146.269338605" watchObservedRunningTime="2025-10-13 06:30:56.186264214 +0000 UTC m=+146.286687130" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.188697 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jpjcg" podStartSLOduration=126.188688405 podStartE2EDuration="2m6.188688405s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:56.185204923 +0000 UTC m=+146.285627849" watchObservedRunningTime="2025-10-13 06:30:56.188688405 +0000 UTC m=+146.289111321" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.247317 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.247788 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.747768015 +0000 UTC m=+146.848191001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.347817 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.348287 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.848269423 +0000 UTC m=+146.948692339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.448775 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.449254 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:56.949238573 +0000 UTC m=+147.049661489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.549827 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.550007 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.049980908 +0000 UTC m=+147.150403824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.550245 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.550523 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.050517013 +0000 UTC m=+147.150939929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.651367 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.651648 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.151602878 +0000 UTC m=+147.252025794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.651955 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.652376 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.15236529 +0000 UTC m=+147.252788306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.733568 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:30:56 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:30:56 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:30:56 crc kubenswrapper[4833]: healthz check failed Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.733625 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.752979 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.753423 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.253406773 +0000 UTC m=+147.353829689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.854373 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.854913 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.354875118 +0000 UTC m=+147.455298124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.955214 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.955400 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.455348184 +0000 UTC m=+147.555771100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.955453 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:56 crc kubenswrapper[4833]: E1013 06:30:56.955738 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.455731256 +0000 UTC m=+147.556154172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.958956 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" event={"ID":"4bd0a924-aaee-4a87-b87d-bbc1d7ddd4d1","Type":"ContainerStarted","Data":"5dca70ed2958d79366c022290a96162c4adca671e30f5e6af9869ec2caff5a7a"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.960512 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" event={"ID":"8f0435d5-94cb-4de1-a43b-7d784b2c8022","Type":"ContainerStarted","Data":"141a3b863a0660799b214df74f5e0c87653391f0594249df1744bdc32a709e7a"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.961910 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" event={"ID":"a46f0ce4-a965-4cc0-869a-0a1edfdb7519","Type":"ContainerStarted","Data":"471280deb2ddee7c8314c419696fa81512f89a4c2e605806fa5f0eddcc4e0635"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.962032 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.963107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" event={"ID":"dc4b124f-5c7d-441f-ba49-ad167dc10163","Type":"ContainerStarted","Data":"4ff35a54fd0dab7224057c1f02106454543351e97f87e90d1f6935254d186172"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.964347 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c7xv6" event={"ID":"8c40960a-a5c5-442a-9124-fba675359f3b","Type":"ContainerStarted","Data":"df7c45a4f8255a598db360555cf1369aeabbdac20e398acf1cfc2ea97fe8e41f"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.964451 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-c7xv6" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.965587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" event={"ID":"e912af71-9fc5-42ea-99cf-5a2e5a42cf0d","Type":"ContainerStarted","Data":"05e64b35edd4b3ae523ba1acfa25188afe96576d64693a7690bbadbc91639718"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.967065 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7" event={"ID":"68000101-bc2e-44dd-affe-be84000fba74","Type":"ContainerStarted","Data":"0e83fd0d0a479eb82ca638f160e5f661c17297493c377fd2c42725e77fd8a148"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.968126 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m5t5p" event={"ID":"a02fa3df-eb86-46d1-af05-0559a10899c8","Type":"ContainerStarted","Data":"c8b121e875d0c1cafaf6b77057cac04cd6b26bd20216b965854322b770b6e4af"} Oct 13 
06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.969755 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" event={"ID":"31fd7ef2-e28a-417f-8b5c-26d976680749","Type":"ContainerStarted","Data":"f82a33f0ef0e323123ce6ba4a4fc05762d7ef76d86aff8b3ed2c4a4b193e8131"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.973166 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-l88r9" podStartSLOduration=125.973148133 podStartE2EDuration="2m5.973148133s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:56.97132907 +0000 UTC m=+147.071751996" watchObservedRunningTime="2025-10-13 06:30:56.973148133 +0000 UTC m=+147.073571049" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.982919 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" event={"ID":"d78c0df2-c046-49a4-b00c-031053c497c4","Type":"ContainerStarted","Data":"cca2890a9dfeba7f8360c1bc1023e406b0221c824d10efab3b3a196c339247a6"} Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.984580 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.984653 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.985743 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6g499 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.985783 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" podUID="1a1d87b9-cb40-4860-8445-4729e0945358" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.991383 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qz7k6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.991672 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" podUID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.999014 4833 patch_prober.go:28] interesting 
pod/oauth-openshift-558db77b4-x7dz2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body= Oct 13 06:30:56 crc kubenswrapper[4833]: I1013 06:30:56.999071 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" podUID="b4411e13-1d37-4d03-ad8a-7d24be467441" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.000020 4833 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sgbwk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.000058 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk" podUID="803e09bf-dacb-49f5-b812-1415b7bc2c37" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.001411 4833 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-t8q8k container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.001448 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" podUID="44a6f1e1-b3f3-4720-b282-65300d1cbf36" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.053027 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rpfzn" podStartSLOduration=126.053012999 podStartE2EDuration="2m6.053012999s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.030820573 +0000 UTC m=+147.131243489" watchObservedRunningTime="2025-10-13 06:30:57.053012999 +0000 UTC m=+147.153435915" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.056263 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.064608 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-13 06:30:57.564582406 +0000 UTC m=+147.665005322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.077437 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" podStartSLOduration=127.077396409 podStartE2EDuration="2m7.077396409s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.056102309 +0000 UTC m=+147.156525225" watchObservedRunningTime="2025-10-13 06:30:57.077396409 +0000 UTC m=+147.177819315" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.084596 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c7xv6" podStartSLOduration=7.084571508 podStartE2EDuration="7.084571508s" podCreationTimestamp="2025-10-13 06:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.077014228 +0000 UTC m=+147.177437144" watchObservedRunningTime="2025-10-13 06:30:57.084571508 +0000 UTC m=+147.184994424" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.139926 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ddtzr" podStartSLOduration=126.13990701 podStartE2EDuration="2m6.13990701s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.111271736 +0000 UTC m=+147.211694682" watchObservedRunningTime="2025-10-13 06:30:57.13990701 +0000 UTC m=+147.240329916" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.140224 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rxcsk" podStartSLOduration=126.140219699 podStartE2EDuration="2m6.140219699s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.139090966 +0000 UTC m=+147.239513902" watchObservedRunningTime="2025-10-13 06:30:57.140219699 +0000 UTC m=+147.240642615" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.166116 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.166610 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.666592487 +0000 UTC m=+147.767015503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.184329 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p6kj7" podStartSLOduration=126.184308163 podStartE2EDuration="2m6.184308163s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.180825902 +0000 UTC m=+147.281248818" watchObservedRunningTime="2025-10-13 06:30:57.184308163 +0000 UTC m=+147.284731079" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.203904 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pzz9g" podStartSLOduration=126.203889163 podStartE2EDuration="2m6.203889163s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.201814683 +0000 UTC m=+147.302237609" watchObservedRunningTime="2025-10-13 06:30:57.203889163 +0000 UTC m=+147.304312079" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.231223 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zk7qg" podStartSLOduration=126.231207029 podStartE2EDuration="2m6.231207029s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.22919349 +0000 UTC m=+147.329616406" watchObservedRunningTime="2025-10-13 06:30:57.231207029 +0000 UTC m=+147.331629935" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.258355 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2vbr" podStartSLOduration=126.258337179 podStartE2EDuration="2m6.258337179s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.254852268 +0000 UTC m=+147.355275184" watchObservedRunningTime="2025-10-13 06:30:57.258337179 +0000 UTC m=+147.358760095" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.267095 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 
06:30:57.267214 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.767193277 +0000 UTC m=+147.867616203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.267430 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.267827 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.767815625 +0000 UTC m=+147.868238541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.283785 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tgsfn" podStartSLOduration=127.28376797 podStartE2EDuration="2m7.28376797s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.280237587 +0000 UTC m=+147.380660503" watchObservedRunningTime="2025-10-13 06:30:57.28376797 +0000 UTC m=+147.384190886" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.296370 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94" podStartSLOduration=126.296355656 podStartE2EDuration="2m6.296355656s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.295646946 +0000 UTC m=+147.396069852" watchObservedRunningTime="2025-10-13 06:30:57.296355656 +0000 UTC m=+147.396778562" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.323124 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" podStartSLOduration=126.323103506 podStartE2EDuration="2m6.323103506s" podCreationTimestamp="2025-10-13 06:28:51 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:57.321030795 +0000 UTC m=+147.421453711" watchObservedRunningTime="2025-10-13 06:30:57.323103506 +0000 UTC m=+147.423526422" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.368786 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.369153 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.869127786 +0000 UTC m=+147.969550692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.470733 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.471091 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:57.971075685 +0000 UTC m=+148.071498601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.571681 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.571785 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-13 06:30:58.071767618 +0000 UTC m=+148.172190524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.572137 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.572470 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.072453278 +0000 UTC m=+148.172876194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.589790 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.589993 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.590925 4833 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-ghwjx container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.590969 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx" podUID="d78c0df2-c046-49a4-b00c-031053c497c4" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.673795 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.674061 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.174043327 +0000 UTC m=+148.274466243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.732264 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:30:57 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:30:57 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:30:57 crc kubenswrapper[4833]: healthz check failed Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.732333 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.775317 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.775914 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.275890403 +0000 UTC m=+148.376313369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.876247 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.876418 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.376392501 +0000 UTC m=+148.476815417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.876696 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.876985 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.376972947 +0000 UTC m=+148.477395853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.977834 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.978126 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.478097423 +0000 UTC m=+148.578520369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.978191 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:57 crc kubenswrapper[4833]: E1013 06:30:57.978454 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.478443613 +0000 UTC m=+148.578866529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.986522 4833 generic.go:334] "Generic (PLEG): container finished" podID="d537ffb6-77d0-4bfc-bc53-54cd70938e24" containerID="2f084ac0c9164b4220b602a3e14b51bd0d26a02c2b5adee8704ad4b6f0d06662" exitCode=0 Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.986574 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" event={"ID":"d537ffb6-77d0-4bfc-bc53-54cd70938e24","Type":"ContainerDied","Data":"2f084ac0c9164b4220b602a3e14b51bd0d26a02c2b5adee8704ad4b6f0d06662"} Oct 13 06:30:57 crc kubenswrapper[4833]: I1013 06:30:57.995070 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" event={"ID":"96a6d6d4-3b2e-410a-af74-2e05a6dc0025","Type":"ContainerStarted","Data":"d7f7e83e1d54240bc7cf520ff43a7e63ce88b5bd5290bc68c1d1b1a845b46784"} Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.007160 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" event={"ID":"31fd7ef2-e28a-417f-8b5c-26d976680749","Type":"ContainerStarted","Data":"ea24748ce8b2ea8f1b8a708ffb44cd836fac1fd2613fec3783100bdff6a3e91f"} Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.008815 4833 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-t8q8k container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.008860 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" 
podUID="44a6f1e1-b3f3-4720-b282-65300d1cbf36" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.008928 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qz7k6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.008945 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" podUID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.009793 4833 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zg4v6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.009823 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" podUID="a46f0ce4-a965-4cc0-869a-0a1edfdb7519" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.039685 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" podStartSLOduration=127.039671855 podStartE2EDuration="2m7.039671855s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:30:58.03812982 +0000 UTC m=+148.138552736" watchObservedRunningTime="2025-10-13 06:30:58.039671855 +0000 UTC m=+148.140094761" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.079054 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.079240 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.579211517 +0000 UTC m=+148.679634433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.079894 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.081596 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.581584556 +0000 UTC m=+148.682007562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.181294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.181564 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.681518337 +0000 UTC m=+148.781941263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.181701 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.182285 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.682274459 +0000 UTC m=+148.782697375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.282871 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.282970 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.782954251 +0000 UTC m=+148.883377167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.283210 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.283460 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.783453676 +0000 UTC m=+148.883876592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.305366 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.316234 4833 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zg4v6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.316269 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" podUID="a46f0ce4-a965-4cc0-869a-0a1edfdb7519" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.316311 4833 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zg4v6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.316368 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6" podUID="a46f0ce4-a965-4cc0-869a-0a1edfdb7519" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.384730 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.384918 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.88489154 +0000 UTC m=+148.985314456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.385039 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.385448 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.885433696 +0000 UTC m=+148.985856702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.385640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.393348 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.486388 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.486577 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.986549411 +0000 UTC m=+149.086972327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.487041 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.487090 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.487127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.487193 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.487418 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:58.987410346 +0000 UTC m=+149.087833262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.487943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.491328 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.495465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.540780 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.548790 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.567953 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.568648 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.570843 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.571386 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.588265 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.588442 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.588561 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.088532201 +0000 UTC m=+149.188967618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.690048 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.690096 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.690122 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.690402 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.190389778 +0000 UTC m=+149.290812694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.747758 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:30:58 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:30:58 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:30:58 crc kubenswrapper[4833]: healthz check failed Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.748139 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.747875 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.793787 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.794000 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.794030 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.794151 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.794218 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.294203402 +0000 UTC m=+149.394626318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.828248 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.885871 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.895700 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.895974 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.395963485 +0000 UTC m=+149.496386401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:58 crc kubenswrapper[4833]: I1013 06:30:58.997185 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:58 crc kubenswrapper[4833]: E1013 06:30:58.997842 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.497826772 +0000 UTC m=+149.598249688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.098458 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.100321 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.600303417 +0000 UTC m=+149.700726443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: W1013 06:30:59.169667 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f7cb3170f42b9b3f20f05d450f3a11c1ac6aa683983f7e90f8ba59663d7de6ce WatchSource:0}: Error finding container f7cb3170f42b9b3f20f05d450f3a11c1ac6aa683983f7e90f8ba59663d7de6ce: Status 404 returned error can't find the container with id f7cb3170f42b9b3f20f05d450f3a11c1ac6aa683983f7e90f8ba59663d7de6ce Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.205854 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.206134 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.706119199 +0000 UTC m=+149.806542115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.307296 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.307787 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.80777421 +0000 UTC m=+149.908197126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.360650 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.395525 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 13 06:30:59 crc kubenswrapper[4833]: W1013 06:30:59.399114 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd21a3d65_8c81_49ff_911c_06e99bf0c0ec.slice/crio-3659462e6f2f8a97b5fcd90c11016b549a9e584b50013e1154b11d7e4add52ea WatchSource:0}: Error finding container 3659462e6f2f8a97b5fcd90c11016b549a9e584b50013e1154b11d7e4add52ea: Status 404 returned error can't find the container with id 3659462e6f2f8a97b5fcd90c11016b549a9e584b50013e1154b11d7e4add52ea Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.408437 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume\") pod \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.408517 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52gj6\" (UniqueName: \"kubernetes.io/projected/d537ffb6-77d0-4bfc-bc53-54cd70938e24-kube-api-access-52gj6\") pod \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.408653 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.408694 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d537ffb6-77d0-4bfc-bc53-54cd70938e24-secret-volume\") pod \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\" (UID: \"d537ffb6-77d0-4bfc-bc53-54cd70938e24\") " Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.411683 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:30:59.911656685 +0000 UTC m=+150.012079601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.412159 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume" (OuterVolumeSpecName: "config-volume") pod "d537ffb6-77d0-4bfc-bc53-54cd70938e24" (UID: "d537ffb6-77d0-4bfc-bc53-54cd70938e24"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.415126 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d537ffb6-77d0-4bfc-bc53-54cd70938e24-kube-api-access-52gj6" (OuterVolumeSpecName: "kube-api-access-52gj6") pod "d537ffb6-77d0-4bfc-bc53-54cd70938e24" (UID: "d537ffb6-77d0-4bfc-bc53-54cd70938e24"). InnerVolumeSpecName "kube-api-access-52gj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.415743 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d537ffb6-77d0-4bfc-bc53-54cd70938e24-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d537ffb6-77d0-4bfc-bc53-54cd70938e24" (UID: "d537ffb6-77d0-4bfc-bc53-54cd70938e24"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.510610 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.510711 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d537ffb6-77d0-4bfc-bc53-54cd70938e24-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.510726 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d537ffb6-77d0-4bfc-bc53-54cd70938e24-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.510737 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52gj6\" (UniqueName: \"kubernetes.io/projected/d537ffb6-77d0-4bfc-bc53-54cd70938e24-kube-api-access-52gj6\") on node \"crc\" DevicePath \"\"" Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.511066 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.01104586 +0000 UTC m=+150.111468846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.611286 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.611394 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.111370682 +0000 UTC m=+150.211793618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.611689 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.611973 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.11196528 +0000 UTC m=+150.212388196 (durationBeforeRetry 500ms). 
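Every mount and unmount failure in this stretch has the same root cause. The volume's unique name embeds the driver (kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8), kubelet resolves that driver name against its table of registered CSI plugins before making any gRPC call, and the hostpath-provisioner has not yet registered with the freshly restarted kubelet, so client construction itself fails. A minimal sketch of that lookup shape, with hypothetical type names (illustrative, not the actual kubelet source):

    package main

    import (
        "fmt"
        "strings"
        "sync"
    )

    // driverRegistry mimics the shape of kubelet's in-memory table of CSI
    // plugins that have announced themselves over the plugin-registration
    // socket. Hypothetical type, for illustration only.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> endpoint
    }

    func (r *driverRegistry) client(driver string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[driver]
        if !ok {
            // Same failure mode as the log: no gRPC call is ever made,
            // client construction fails on the name lookup itself.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
        }
        return ep, nil
    }

    // splitUniqueName splits "kubernetes.io/csi/<driver>^<volumeHandle>".
    func splitUniqueName(unique string) (driver, handle string) {
        s := strings.TrimPrefix(unique, "kubernetes.io/csi/")
        parts := strings.SplitN(s, "^", 2)
        return parts[0], parts[1]
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]string{}} // empty: driver not registered yet
        driver, handle := splitUniqueName("kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
        if _, err := reg.client(driver); err != nil {
            fmt.Printf("mount of %s fails: %v\n", handle, err)
        }
    }
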
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.712495 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.712684 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.212654092 +0000 UTC m=+150.313077008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.712769 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.713044 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.213036474 +0000 UTC m=+150.313459390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.732085 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:30:59 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:30:59 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:30:59 crc kubenswrapper[4833]: healthz check failed Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.732131 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.813602 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.813776 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.313739897 +0000 UTC m=+150.414162853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.813916 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.814311 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.314297853 +0000 UTC m=+150.414720769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.924242 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.924722 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.424702438 +0000 UTC m=+150.525125344 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:30:59 crc kubenswrapper[4833]: I1013 06:30:59.925044 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:30:59 crc kubenswrapper[4833]: E1013 06:30:59.925464 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.42545341 +0000 UTC m=+150.525876326 (durationBeforeRetry 500ms). 
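The nestedpendingoperations.go:348 rejections are kubelet's per-volume retry gate: a failed mount or unmount records a not-before deadline, and any reconciler pass that arrives earlier is refused with "No retries permitted until ...". In this excerpt the window stays at a flat 500ms; kubelet can lengthen the delay on repeated failures, but that growth is not visible in a slice this short. A toy version of such a gate, under those assumptions:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryGate refuses to re-run an operation for a key until its backoff
    // deadline has passed. Simplified sketch of the idea behind kubelet's
    // nestedpendingoperations; not the real implementation.
    type retryGate struct {
        notBefore map[string]time.Time
        delay     time.Duration
    }

    func (g *retryGate) run(key string, op func() error) error {
        if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                t.Format(time.RFC3339Nano), g.delay)
        }
        if err := op(); err != nil {
            g.notBefore[key] = time.Now().Add(g.delay) // arm the gate on failure
            return err
        }
        delete(g.notBefore, key) // success clears the backoff
        return nil
    }

    func main() {
        g := &retryGate{notBefore: map[string]time.Time{}, delay: 500 * time.Millisecond}
        mount := func() error { return errors.New("driver not registered") }
        fmt.Println(g.run("pvc-657094db", mount)) // fails, arms the gate
        fmt.Println(g.run("pvc-657094db", mount)) // rejected: inside the 500ms window
    }
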
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.025879 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.026254 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.526234106 +0000 UTC m=+150.626657022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.032225 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d21a3d65-8c81-49ff-911c-06e99bf0c0ec","Type":"ContainerStarted","Data":"75f5d8a322ff443a2362abde6eae51bf3b08adbb4e44648e5772b4ee53731392"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.032268 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d21a3d65-8c81-49ff-911c-06e99bf0c0ec","Type":"ContainerStarted","Data":"3659462e6f2f8a97b5fcd90c11016b549a9e584b50013e1154b11d7e4add52ea"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.034607 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" event={"ID":"d537ffb6-77d0-4bfc-bc53-54cd70938e24","Type":"ContainerDied","Data":"5deefd5b787e411d65e6d43be969201491923306d62dffe5b032a364a9853aea"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.034640 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5deefd5b787e411d65e6d43be969201491923306d62dffe5b032a364a9853aea" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.034718 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.039122 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2cf64a2c71a95cca16f67da997085d5c581b89324cf4dd17f6170360aa96365e"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.039183 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f7cb3170f42b9b3f20f05d450f3a11c1ac6aa683983f7e90f8ba59663d7de6ce"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.040348 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8e112c6df4a561acdc01e4e57dc8f2fc40523bd1296001c376cd7d843065f1fa"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.040414 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5ca26b7284d7b87f16bbe3e9357293a1c5c1678fced3b8014e1dcecb274a82a9"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.041398 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e187b10920d44a97d60709995785e40fcbe2c7cc942235ccb907b52dccf72ec2"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.041429 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0aca48d24db19c3eccf79e0f7451bd4f27a3dc1e5585801b26682b2c2b64eb82"} Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.041836 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.065982 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.065965303 podStartE2EDuration="2.065965303s" podCreationTimestamp="2025-10-13 06:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:31:00.065350825 +0000 UTC m=+150.165773751" watchObservedRunningTime="2025-10-13 06:31:00.065965303 +0000 UTC m=+150.166388219" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.127280 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.127731 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.627715091 +0000 UTC m=+150.728138057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.228369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.228546 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.728509597 +0000 UTC m=+150.828932513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.228782 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.229116 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.729105124 +0000 UTC m=+150.829528040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.329797 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.329934 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.8299057 +0000 UTC m=+150.930328616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.330058 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.330346 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.830335593 +0000 UTC m=+150.930758499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.431652 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.431796 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.931774657 +0000 UTC m=+151.032197583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.431895 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.432233 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:00.93222351 +0000 UTC m=+151.032646426 (durationBeforeRetry 500ms). 
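The cadence of the repeating block is the volume reconciler itself: the timestamps advance by roughly 100ms per pass, and each pass re-walks desired state (mount pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 for the new image-registry-697d97f7c8-7kzpp pod) against actual state (tear the same PVC down for the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b), re-queueing whatever still fails. The control-loop shape, reduced to a sketch (not kubelet's real code):

    package main

    import (
        "fmt"
        "time"
    )

    // A stripped-down desired-state/actual-state volume reconciler: every
    // pass (the log shows one roughly every 100ms) retries unmounts for
    // volumes no longer desired and mounts for volumes not yet actual.
    func reconcile(desired, actual map[string]bool, attach func(string) error) {
        for v := range actual {
            if !desired[v] {
                fmt.Printf("UnmountVolume started for volume %q\n", v)
            }
        }
        for v := range desired {
            if !actual[v] {
                fmt.Printf("MountVolume started for volume %q\n", v)
                if err := attach(v); err != nil {
                    fmt.Println("  ", err) // stays desired; retried next pass
                }
            }
        }
    }

    func main() {
        desired := map[string]bool{"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true}
        actual := map[string]bool{}
        attach := func(string) error {
            return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
        }
        for i := 0; i < 3; i++ { // three passes, mirroring the repeating log block
            reconcile(desired, actual, attach)
            time.Sleep(100 * time.Millisecond)
        }
    }
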
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.440759 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4sw6h"] Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.440934 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d537ffb6-77d0-4bfc-bc53-54cd70938e24" containerName="collect-profiles" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.440945 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d537ffb6-77d0-4bfc-bc53-54cd70938e24" containerName="collect-profiles" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.441032 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d537ffb6-77d0-4bfc-bc53-54cd70938e24" containerName="collect-profiles" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.441672 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.447682 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.456282 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sw6h"] Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.533233 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.533432 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.033399657 +0000 UTC m=+151.133822573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.533485 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.533552 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-utilities\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.533588 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnnlp\" (UniqueName: \"kubernetes.io/projected/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-kube-api-access-jnnlp\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.533632 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-catalog-content\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.533850 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.03383983 +0000 UTC m=+151.134262796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.542841 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.542914 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.634207 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.634380 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.134353828 +0000 UTC m=+151.234776744 (durationBeforeRetry 500ms). 
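Two unrelated probe failures recur meanwhile. The router's startup probe reads the aggregated health report its endpoint returns with HTTP 500 ("[-]" for failing checks, "[+]" for passing ones, then "healthz check failed"), while the machine-config-daemon liveness probe cannot connect at all (connection refused on 127.0.0.1:8798), meaning nothing is listening there yet. A minimal handler that emits the same report style (a sketch, not the router's actual code):

    package main

    import (
        "fmt"
        "net/http"
    )

    // healthzHandler aggregates named checks into the format the router's
    // startup probe logs above: "[-]name failed" / "[+]name ok", with
    // HTTP 500 if anything fails. Illustrative sketch.
    func healthzHandler(checks map[string]func() error) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            failed := false
            body := ""
            for name, check := range checks {
                if err := check(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError) // probe sees statuscode: 500
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        checks := map[string]func() error{
            "backend-http":    func() error { return fmt.Errorf("not ready") },
            "has-synced":      func() error { return fmt.Errorf("not synced") },
            "process-running": func() error { return nil },
        }
        http.Handle("/healthz", healthzHandler(checks))
        _ = http.ListenAndServe("127.0.0.1:9090", nil)
    }
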
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.634504 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.634617 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-utilities\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.634655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnnlp\" (UniqueName: \"kubernetes.io/projected/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-kube-api-access-jnnlp\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.634696 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-catalog-content\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.634822 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.134812311 +0000 UTC m=+151.235235277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.635172 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-utilities\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.635223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-catalog-content\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.642678 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vwg98"] Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.643694 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.647287 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.655681 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vwg98"] Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.662795 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnnlp\" (UniqueName: \"kubernetes.io/projected/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-kube-api-access-jnnlp\") pod \"certified-operators-4sw6h\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.735902 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.736026 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.236008228 +0000 UTC m=+151.336431145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.736140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.736248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbwh8\" (UniqueName: \"kubernetes.io/projected/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-kube-api-access-tbwh8\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.736367 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.236358109 +0000 UTC m=+151.336781025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.736365 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-catalog-content\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.736427 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-utilities\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.742035 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:00 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:00 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:00 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.742099 4833 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.755598 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.837321 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.837631 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.337597217 +0000 UTC m=+151.438020133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.837739 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbwh8\" (UniqueName: \"kubernetes.io/projected/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-kube-api-access-tbwh8\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.837774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-catalog-content\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.837806 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-utilities\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.837829 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.838101 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.338090092 +0000 UTC m=+151.438513008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.838186 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-catalog-content\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.838252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-utilities\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.853770 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8xfsx"] Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.854921 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.880938 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xfsx"] Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.898500 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbwh8\" (UniqueName: \"kubernetes.io/projected/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-kube-api-access-tbwh8\") pod \"community-operators-vwg98\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.939345 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.939685 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77ls\" (UniqueName: \"kubernetes.io/projected/48b7084d-6299-4cfd-88f3-e2dca282c478-kube-api-access-x77ls\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.939729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-utilities\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:31:00 crc 
kubenswrapper[4833]: I1013 06:31:00.939802 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-catalog-content\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:31:00 crc kubenswrapper[4833]: E1013 06:31:00.939933 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.439914127 +0000 UTC m=+151.540337043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:00 crc kubenswrapper[4833]: I1013 06:31:00.973271 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.041138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-catalog-content\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.041205 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77ls\" (UniqueName: \"kubernetes.io/projected/48b7084d-6299-4cfd-88f3-e2dca282c478-kube-api-access-x77ls\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.041230 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-utilities\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.041251 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.041551 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.541526927 +0000 UTC m=+151.641949843 (durationBeforeRetry 500ms). 
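Against that backdrop, the marketplace catalog pods (certified-operators-4sw6h and -8xfsx, community-operators-vwg98 and -m7mtw) make normal progress: their emptyDir volumes (utilities, catalog-content) and projected token volumes (kube-api-access-*) pass VerifyControllerAttachedVolume and reach "MountVolume.SetUp succeeded" immediately, because none of them touch the missing CSI driver. An emptyDir SetUp is essentially a local mkdir under the pod's volumes directory, as this reduced sketch shows (directory layout per the kubelet convention, behavior simplified):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // emptyDirSetUp shows why the catalog pods' "utilities" and
    // "catalog-content" volumes mount instantly while the CSI PVC loops:
    // an emptyDir SetUp is little more than a local MkdirAll under the
    // pod's volumes directory, with no external driver involved.
    func emptyDirSetUp(kubeletRoot, podUID, volName string) (string, error) {
        dir := filepath.Join(kubeletRoot, "pods", podUID, "volumes", "kubernetes.io~empty-dir", volName)
        return dir, os.MkdirAll(dir, 0o750)
    }

    func main() {
        root, _ := os.MkdirTemp("", "kubelet-root")
        for _, v := range []string{"utilities", "catalog-content"} {
            dir, err := emptyDirSetUp(root, "9cc9a234-36b5-410a-ab39-d8ee02cecf3c", v)
            fmt.Println(dir, err)
        }
    }
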
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.041989 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-catalog-content\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.042644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-utilities\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.051489 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7mtw"]
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.052376 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.075931 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7mtw"]
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.080252 4833 generic.go:334] "Generic (PLEG): container finished" podID="d21a3d65-8c81-49ff-911c-06e99bf0c0ec" containerID="75f5d8a322ff443a2362abde6eae51bf3b08adbb4e44648e5772b4ee53731392" exitCode=0
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.080357 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d21a3d65-8c81-49ff-911c-06e99bf0c0ec","Type":"ContainerDied","Data":"75f5d8a322ff443a2362abde6eae51bf3b08adbb4e44648e5772b4ee53731392"}
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.110528 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77ls\" (UniqueName: \"kubernetes.io/projected/48b7084d-6299-4cfd-88f3-e2dca282c478-kube-api-access-x77ls\") pod \"certified-operators-8xfsx\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " pod="openshift-marketplace/certified-operators-8xfsx"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.141786 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.142027 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-catalog-content\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.142105 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-utilities\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.142131 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9n2\" (UniqueName: \"kubernetes.io/projected/0efc55e5-22c4-4df7-b07e-30bb441769c4-kube-api-access-zf9n2\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.142302 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.642282562 +0000 UTC m=+151.742705478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.184279 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sw6h"]
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.197787 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xfsx"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.248482 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-utilities\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.248523 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9n2\" (UniqueName: \"kubernetes.io/projected/0efc55e5-22c4-4df7-b07e-30bb441769c4-kube-api-access-zf9n2\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.248577 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.248627 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-catalog-content\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.248980 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-catalog-content\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.249233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-utilities\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.250556 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.750526454 +0000 UTC m=+151.850949370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.279275 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9n2\" (UniqueName: \"kubernetes.io/projected/0efc55e5-22c4-4df7-b07e-30bb441769c4-kube-api-access-zf9n2\") pod \"community-operators-m7mtw\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.334965 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zg4v6"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.350057 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.350461 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.850443715 +0000 UTC m=+151.950866631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.392855 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7mtw"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.454163 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.454564 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:01.954527536 +0000 UTC m=+152.054950452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.502436 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vwg98"]
Oct 13 06:31:01 crc kubenswrapper[4833]: W1013 06:31:01.548587 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c65c1f2_5e55_4133_a013_d4d5e101e0d7.slice/crio-551fd92a95240d3e542b1c8fff8687ba87cf0a437932f8f75da9857dae90c9bf WatchSource:0}: Error finding container 551fd92a95240d3e542b1c8fff8687ba87cf0a437932f8f75da9857dae90c9bf: Status 404 returned error can't find the container with id 551fd92a95240d3e542b1c8fff8687ba87cf0a437932f8f75da9857dae90c9bf
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.555490 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.555880 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.055861107 +0000 UTC m=+152.156284023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.657557 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.657822 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.157810956 +0000 UTC m=+152.258233872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.661456 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xfsx"]
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.703274 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cc9a234_36b5_410a_ab39_d8ee02cecf3c.slice/crio-conmon-2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879.scope\": RecentStats: unable to find data in memory cache]"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.732729 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 13 06:31:01 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld
Oct 13 06:31:01 crc kubenswrapper[4833]: [+]process-running ok
Oct 13 06:31:01 crc kubenswrapper[4833]: healthz check failed
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.732781 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.758147 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.758343 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.258311643 +0000 UTC m=+152.358734569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.758623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.758925 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.258914201 +0000 UTC m=+152.359337117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.859301 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.859855 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.35984081 +0000 UTC m=+152.460263726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.961295 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:01 crc kubenswrapper[4833]: E1013 06:31:01.961682 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.461667606 +0000 UTC m=+152.562090522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:01 crc kubenswrapper[4833]: I1013 06:31:01.982457 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7mtw"]
Oct 13 06:31:02 crc kubenswrapper[4833]: W1013 06:31:02.034241 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0efc55e5_22c4_4df7_b07e_30bb441769c4.slice/crio-b14d79336297daca0182a5b38f55a3612d344e844c666a76ef861eee64bdffe5 WatchSource:0}: Error finding container b14d79336297daca0182a5b38f55a3612d344e844c666a76ef861eee64bdffe5: Status 404 returned error can't find the container with id b14d79336297daca0182a5b38f55a3612d344e844c666a76ef861eee64bdffe5
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.062778 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.062921 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.562890444 +0000 UTC m=+152.663313370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.063209 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.063609 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.563597845 +0000 UTC m=+152.664020811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.099776 4833 generic.go:334] "Generic (PLEG): container finished" podID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerID="2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879" exitCode=0
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.099849 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sw6h" event={"ID":"9cc9a234-36b5-410a-ab39-d8ee02cecf3c","Type":"ContainerDied","Data":"2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879"}
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.099877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sw6h" event={"ID":"9cc9a234-36b5-410a-ab39-d8ee02cecf3c","Type":"ContainerStarted","Data":"a8f89f49c1bf5172cd8da84f4563d9646b234566ee18248d3984d8509e128b46"}
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.104044 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.107928 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7mtw" event={"ID":"0efc55e5-22c4-4df7-b07e-30bb441769c4","Type":"ContainerStarted","Data":"b14d79336297daca0182a5b38f55a3612d344e844c666a76ef861eee64bdffe5"}
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.119703 4833 generic.go:334] "Generic (PLEG): container finished" podID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerID="d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb" exitCode=0
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.119761 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xfsx" event={"ID":"48b7084d-6299-4cfd-88f3-e2dca282c478","Type":"ContainerDied","Data":"d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb"}
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.119786 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xfsx" event={"ID":"48b7084d-6299-4cfd-88f3-e2dca282c478","Type":"ContainerStarted","Data":"fef6fa5d8b0b470ab41462ed693f8a4251dc9c9c96afcd525e23e6ea2566d438"}
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.132322 4833 generic.go:334] "Generic (PLEG): container finished" podID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerID="c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310" exitCode=0
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.132431 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwg98" event={"ID":"6c65c1f2-5e55-4133-a013-d4d5e101e0d7","Type":"ContainerDied","Data":"c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310"}
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.132455 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwg98" event={"ID":"6c65c1f2-5e55-4133-a013-d4d5e101e0d7","Type":"ContainerStarted","Data":"551fd92a95240d3e542b1c8fff8687ba87cf0a437932f8f75da9857dae90c9bf"}
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.153771 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" event={"ID":"96a6d6d4-3b2e-410a-af74-2e05a6dc0025","Type":"ContainerStarted","Data":"9ddb78de7de8c1aca636523a7075067379b7f3d3b322f1a033612e727c914287"}
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.180391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.182087 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.682065895 +0000 UTC m=+152.782488811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.257901 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.284243 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.284933 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.784911671 +0000 UTC m=+152.885334627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.385519 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.385699 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.885669475 +0000 UTC m=+152.986092391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.385997 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.387049 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.887038775 +0000 UTC m=+152.987461691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.486877 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.487164 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:02.987146691 +0000 UTC m=+153.087569607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.580780 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.580810 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.594441 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.594796 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.094784266 +0000 UTC m=+153.195207182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.603182 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.609964 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.610234 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ghwjx"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.643002 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6l2jk"]
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.643223 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21a3d65-8c81-49ff-911c-06e99bf0c0ec" containerName="pruner"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.643234 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21a3d65-8c81-49ff-911c-06e99bf0c0ec" containerName="pruner"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.643333 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21a3d65-8c81-49ff-911c-06e99bf0c0ec" containerName="pruner"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.644059 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.645363 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.657318 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.661657 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-77swh"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.671848 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.671893 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.671854 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.671947 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.673073 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-77swh"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.677120 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6l2jk"]
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.705904 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kube-api-access\") pod \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\" (UID: \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\") "
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.705953 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kubelet-dir\") pod \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\" (UID: \"d21a3d65-8c81-49ff-911c-06e99bf0c0ec\") "
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.706063 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.707499 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d21a3d65-8c81-49ff-911c-06e99bf0c0ec" (UID: "d21a3d65-8c81-49ff-911c-06e99bf0c0ec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.707603 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.207584111 +0000 UTC m=+153.308007027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.721185 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d21a3d65-8c81-49ff-911c-06e99bf0c0ec" (UID: "d21a3d65-8c81-49ff-911c-06e99bf0c0ec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.739182 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dkwvf"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.752787 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 13 06:31:02 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld
Oct 13 06:31:02 crc kubenswrapper[4833]: [+]process-running ok
Oct 13 06:31:02 crc kubenswrapper[4833]: healthz check failed
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.752857 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.800069 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.813318 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcfx7\" (UniqueName: \"kubernetes.io/projected/af2f317c-d52d-483f-a616-0d4868b57951-kube-api-access-tcfx7\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.813450 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-utilities\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.813492 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.813597 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-catalog-content\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.815123 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.815152 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d21a3d65-8c81-49ff-911c-06e99bf0c0ec-kubelet-dir\") on node \"crc\" DevicePath \"\""
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.815182 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.315167675 +0000 UTC m=+153.415590691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.915843 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.916047 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-catalog-content\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.916094 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcfx7\" (UniqueName: \"kubernetes.io/projected/af2f317c-d52d-483f-a616-0d4868b57951-kube-api-access-tcfx7\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.916169 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.416135746 +0000 UTC m=+153.516558662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.916376 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-utilities\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.916425 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.916759 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-catalog-content\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: E1013 06:31:02.917624 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.417612919 +0000 UTC m=+153.518035925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.917763 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-utilities\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.945663 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcfx7\" (UniqueName: \"kubernetes.io/projected/af2f317c-d52d-483f-a616-0d4868b57951-kube-api-access-tcfx7\") pod \"redhat-marketplace-6l2jk\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:02 crc kubenswrapper[4833]: I1013 06:31:02.957364 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6l2jk"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.018419 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.018969 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.5189516 +0000 UTC m=+153.619374516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.039068 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgr7"]
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.040246 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.050693 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgr7"]
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.119706 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.119794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-catalog-content\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.119820 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-utilities\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.119839 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7b24\" (UniqueName: \"kubernetes.io/projected/57de76e3-fdf0-4c6e-aa11-702f0368cb41-kube-api-access-s7b24\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.120128 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.620116127 +0000 UTC m=+153.720539043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.126967 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.158652 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgbwk"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.159745 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lmq94"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.220803 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.221115 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-catalog-content\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.221156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-utilities\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.221185 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7b24\" (UniqueName: \"kubernetes.io/projected/57de76e3-fdf0-4c6e-aa11-702f0368cb41-kube-api-access-s7b24\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.221904 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.721872941 +0000 UTC m=+153.822295907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.227361 4833 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.227420 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-catalog-content\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.228555 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d21a3d65-8c81-49ff-911c-06e99bf0c0ec","Type":"ContainerDied","Data":"3659462e6f2f8a97b5fcd90c11016b549a9e584b50013e1154b11d7e4add52ea"}
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.228592 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3659462e6f2f8a97b5fcd90c11016b549a9e584b50013e1154b11d7e4add52ea"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.228660 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.233803 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-utilities\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.236181 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" event={"ID":"96a6d6d4-3b2e-410a-af74-2e05a6dc0025","Type":"ContainerStarted","Data":"ea9c4a66e7c9d75f72b33a89e6d9d8cd2bac2ccf2de848dd3703e5a5523df2af"}
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.236220 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" event={"ID":"96a6d6d4-3b2e-410a-af74-2e05a6dc0025","Type":"ContainerStarted","Data":"d30d5be6e9bca0b90ddd352cd591993c8b54a44c5df1878372803c01b8d25624"}
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.241065 4833 generic.go:334] "Generic (PLEG): container finished" podID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerID="1966f5a66a7e0d84d2995346e19186a61dd96aaefdb6c25c9f2f68f6bd92ceb3" exitCode=0
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.241233 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7mtw" event={"ID":"0efc55e5-22c4-4df7-b07e-30bb441769c4","Type":"ContainerDied","Data":"1966f5a66a7e0d84d2995346e19186a61dd96aaefdb6c25c9f2f68f6bd92ceb3"}
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.245412 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7b24\" (UniqueName: \"kubernetes.io/projected/57de76e3-fdf0-4c6e-aa11-702f0368cb41-kube-api-access-s7b24\") pod \"redhat-marketplace-rdgr7\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.267197 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jl8l9" podStartSLOduration=13.26717565 podStartE2EDuration="13.26717565s" podCreationTimestamp="2025-10-13 06:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:31:03.256836339 +0000 UTC m=+153.357259255" watchObservedRunningTime="2025-10-13 06:31:03.26717565 +0000 UTC m=+153.367598566"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.310141 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.310195 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.315339 4833 patch_prober.go:28] interesting pod/console-f9d7485db-tgsfn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.315388 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tgsfn" podUID="00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.322681 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.325347 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.825328364 +0000 UTC m=+153.925751270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.388911 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdgr7"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.392278 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6l2jk"]
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.422564 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6g499"
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.423709 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.423885 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.923868434 +0000 UTC m=+154.024291340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.425337 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.425755 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:03.925740108 +0000 UTC m=+154.026163034 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.451651 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t8q8k" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.465635 4833 patch_prober.go:28] interesting pod/apiserver-76f77b778f-h8p9r container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]log ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]etcd ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/generic-apiserver-start-informers ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/max-in-flight-filter ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 13 06:31:03 crc kubenswrapper[4833]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 13 06:31:03 crc kubenswrapper[4833]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/project.openshift.io-projectcache ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-startinformers ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 13 06:31:03 crc kubenswrapper[4833]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 13 06:31:03 crc kubenswrapper[4833]: livez check failed Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.465706 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" podUID="31fd7ef2-e28a-417f-8b5c-26d976680749" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.526827 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.527577 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:04.027561054 +0000 UTC m=+154.127983970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.629576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.629925 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 06:31:04.129907575 +0000 UTC m=+154.230330491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7kzpp" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.648354 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drnc5"] Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.649695 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.650861 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drnc5"] Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.654807 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.668053 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgr7"] Oct 13 06:31:03 crc kubenswrapper[4833]: W1013 06:31:03.713821 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57de76e3_fdf0_4c6e_aa11_702f0368cb41.slice/crio-07b7d1bfe12aa0933c4315d471be92f9bb6a4a3c4c1ea0732f9b01cf4d6bd6d1 WatchSource:0}: Error finding container 07b7d1bfe12aa0933c4315d471be92f9bb6a4a3c4c1ea0732f9b01cf4d6bd6d1: Status 404 returned error can't find the container with id 07b7d1bfe12aa0933c4315d471be92f9bb6a4a3c4c1ea0732f9b01cf4d6bd6d1 Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.736376 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.736593 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-utilities\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.736620 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwb5\" (UniqueName: \"kubernetes.io/projected/63fdb502-c6b1-44e8-86a8-c9571886f5b3-kube-api-access-xjwb5\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: E1013 06:31:03.736667 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 06:31:04.236641974 +0000 UTC m=+154.337064890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.736694 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-catalog-content\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.738679 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:03 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:03 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:03 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.738716 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.788664 4833 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-13T06:31:03.227388961Z","Handler":null,"Name":""} Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.801726 4833 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.801765 4833 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.837447 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjwb5\" (UniqueName: \"kubernetes.io/projected/63fdb502-c6b1-44e8-86a8-c9571886f5b3-kube-api-access-xjwb5\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.837502 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-catalog-content\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.837551 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.837604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-utilities\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.837968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-utilities\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.838136 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-catalog-content\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.843511 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.843566 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.874641 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjwb5\" (UniqueName: \"kubernetes.io/projected/63fdb502-c6b1-44e8-86a8-c9571886f5b3-kube-api-access-xjwb5\") pod \"redhat-operators-drnc5\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.890030 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7kzpp\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") " pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.938809 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 06:31:03 crc kubenswrapper[4833]: I1013 06:31:03.945100 4833 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.007430 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.046308 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8sz95"] Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.063245 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8sz95"] Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.063365 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.138165 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.141302 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-catalog-content\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.141355 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-utilities\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.141381 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjn5g\" (UniqueName: \"kubernetes.io/projected/6a01ee6c-348d-403c-835a-80cd28ddb6ee-kube-api-access-bjn5g\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.243219 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-catalog-content\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.243310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjn5g\" (UniqueName: \"kubernetes.io/projected/6a01ee6c-348d-403c-835a-80cd28ddb6ee-kube-api-access-bjn5g\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.243333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-utilities\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.244111 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-catalog-content\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.244184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-utilities\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.252109 4833 generic.go:334] "Generic (PLEG): container finished" podID="af2f317c-d52d-483f-a616-0d4868b57951" containerID="3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b" exitCode=0 Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.252229 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6l2jk" event={"ID":"af2f317c-d52d-483f-a616-0d4868b57951","Type":"ContainerDied","Data":"3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b"} Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.252260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6l2jk" event={"ID":"af2f317c-d52d-483f-a616-0d4868b57951","Type":"ContainerStarted","Data":"78f00b1dd6d63a455a1a92292fefae349ddc586c3c3d57552e7f2ea3eef2e0c3"} Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.263182 4833 generic.go:334] "Generic (PLEG): container finished" podID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerID="0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e" exitCode=0 Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.263508 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgr7" event={"ID":"57de76e3-fdf0-4c6e-aa11-702f0368cb41","Type":"ContainerDied","Data":"0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e"} Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.263567 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgr7" event={"ID":"57de76e3-fdf0-4c6e-aa11-702f0368cb41","Type":"ContainerStarted","Data":"07b7d1bfe12aa0933c4315d471be92f9bb6a4a3c4c1ea0732f9b01cf4d6bd6d1"} Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.273806 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjn5g\" (UniqueName: \"kubernetes.io/projected/6a01ee6c-348d-403c-835a-80cd28ddb6ee-kube-api-access-bjn5g\") pod \"redhat-operators-8sz95\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.307088 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.307843 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.312062 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.313864 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.319470 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.400341 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.447191 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c787b5-6f34-422e-a227-bfefade4c11e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c2c787b5-6f34-422e-a227-bfefade4c11e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.447237 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2c787b5-6f34-422e-a227-bfefade4c11e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c2c787b5-6f34-422e-a227-bfefade4c11e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.517676 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drnc5"] Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.548942 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2c787b5-6f34-422e-a227-bfefade4c11e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c2c787b5-6f34-422e-a227-bfefade4c11e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.549058 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c787b5-6f34-422e-a227-bfefade4c11e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c2c787b5-6f34-422e-a227-bfefade4c11e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.549071 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2c787b5-6f34-422e-a227-bfefade4c11e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c2c787b5-6f34-422e-a227-bfefade4c11e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.579151 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c787b5-6f34-422e-a227-bfefade4c11e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c2c787b5-6f34-422e-a227-bfefade4c11e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.637290 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.650342 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.733305 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:04 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:04 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:04 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.733393 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.782404 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7kzpp"] Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.869865 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8sz95"] Oct 13 06:31:04 crc kubenswrapper[4833]: I1013 06:31:04.928199 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 13 06:31:04 crc kubenswrapper[4833]: W1013 06:31:04.934035 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc2c787b5_6f34_422e_a227_bfefade4c11e.slice/crio-2201eb4303d828710de26feeb26a521a4d85c71cfeb7671d0430415306445445 WatchSource:0}: Error finding container 2201eb4303d828710de26feeb26a521a4d85c71cfeb7671d0430415306445445: Status 404 returned error can't find the container with id 2201eb4303d828710de26feeb26a521a4d85c71cfeb7671d0430415306445445 Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.271258 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c2c787b5-6f34-422e-a227-bfefade4c11e","Type":"ContainerStarted","Data":"2201eb4303d828710de26feeb26a521a4d85c71cfeb7671d0430415306445445"} Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.274414 4833 generic.go:334] "Generic (PLEG): container finished" podID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerID="2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2" exitCode=0 Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.274545 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drnc5" event={"ID":"63fdb502-c6b1-44e8-86a8-c9571886f5b3","Type":"ContainerDied","Data":"2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2"} Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.274596 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drnc5" event={"ID":"63fdb502-c6b1-44e8-86a8-c9571886f5b3","Type":"ContainerStarted","Data":"37d10bed4452377d10143a5a144aa526b0d42ad833d246db9af75039f30da61a"} Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.284193 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" event={"ID":"523f8b54-3667-4d44-99b3-99a4caca1cee","Type":"ContainerStarted","Data":"1fd0df744b57e356d54b7fb67b8cc257c98d6b8ebbee32395b942b279b1913b0"} Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.287681 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sz95" event={"ID":"6a01ee6c-348d-403c-835a-80cd28ddb6ee","Type":"ContainerStarted","Data":"5687ab767bc6716bc16c7075b4d411c3c8e2eb33875ae1d37ee49ad4c11734e3"} Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.531180 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c7xv6" Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.736309 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:05 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:05 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:05 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:05 crc kubenswrapper[4833]: I1013 06:31:05.736393 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.293703 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" event={"ID":"523f8b54-3667-4d44-99b3-99a4caca1cee","Type":"ContainerStarted","Data":"c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0"} Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.294698 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.298375 4833 generic.go:334] "Generic (PLEG): container finished" podID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerID="2d254f5eebe9fa469452067d9f2a3c85d1d3f3bdbae5c68ec7418ce6797659b5" exitCode=0 Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.298422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sz95" event={"ID":"6a01ee6c-348d-403c-835a-80cd28ddb6ee","Type":"ContainerDied","Data":"2d254f5eebe9fa469452067d9f2a3c85d1d3f3bdbae5c68ec7418ce6797659b5"} Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.300321 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c2c787b5-6f34-422e-a227-bfefade4c11e","Type":"ContainerStarted","Data":"e009b06a492baf054c54cf17699786ebad57c2afb40498abdbe880243dcbf5af"} Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.315502 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" podStartSLOduration=135.315487314 podStartE2EDuration="2m15.315487314s" podCreationTimestamp="2025-10-13 06:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:31:06.313517456 +0000 UTC m=+156.413940382" watchObservedRunningTime="2025-10-13 06:31:06.315487314 +0000 UTC 
m=+156.415910230" Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.329717 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.329701318 podStartE2EDuration="2.329701318s" podCreationTimestamp="2025-10-13 06:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:31:06.326935237 +0000 UTC m=+156.427358183" watchObservedRunningTime="2025-10-13 06:31:06.329701318 +0000 UTC m=+156.430124234" Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.731579 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:06 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:06 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:06 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:06 crc kubenswrapper[4833]: I1013 06:31:06.731950 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:07 crc kubenswrapper[4833]: I1013 06:31:07.311022 4833 generic.go:334] "Generic (PLEG): container finished" podID="c2c787b5-6f34-422e-a227-bfefade4c11e" containerID="e009b06a492baf054c54cf17699786ebad57c2afb40498abdbe880243dcbf5af" exitCode=0 Oct 13 06:31:07 crc kubenswrapper[4833]: I1013 06:31:07.311128 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c2c787b5-6f34-422e-a227-bfefade4c11e","Type":"ContainerDied","Data":"e009b06a492baf054c54cf17699786ebad57c2afb40498abdbe880243dcbf5af"} Oct 13 06:31:07 crc kubenswrapper[4833]: I1013 06:31:07.582336 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:31:07 crc kubenswrapper[4833]: I1013 06:31:07.588199 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-h8p9r" Oct 13 06:31:07 crc kubenswrapper[4833]: I1013 06:31:07.732091 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:07 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:07 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:07 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:07 crc kubenswrapper[4833]: I1013 06:31:07.732367 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:08 crc kubenswrapper[4833]: I1013 06:31:08.731663 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:08 crc 
kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:08 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:08 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:08 crc kubenswrapper[4833]: I1013 06:31:08.731713 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:09 crc kubenswrapper[4833]: I1013 06:31:09.731431 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:09 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:09 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:09 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:09 crc kubenswrapper[4833]: I1013 06:31:09.731688 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:10 crc kubenswrapper[4833]: I1013 06:31:10.730844 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:10 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:10 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:10 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:10 crc kubenswrapper[4833]: I1013 06:31:10.730929 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:11 crc kubenswrapper[4833]: I1013 06:31:11.732407 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:11 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:11 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:11 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:11 crc kubenswrapper[4833]: I1013 06:31:11.732489 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.418715 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.445465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2fd6b1c1-777a-46be-960c-c6109d1615ad-metrics-certs\") pod \"network-metrics-daemon-28gq6\" (UID: \"2fd6b1c1-777a-46be-960c-c6109d1615ad\") " pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.553363 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-28gq6" Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.672118 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.672176 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.672779 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.672809 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.732555 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:12 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:12 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:12 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:12 crc kubenswrapper[4833]: I1013 06:31:12.732611 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:13 crc kubenswrapper[4833]: I1013 06:31:13.310236 4833 patch_prober.go:28] interesting pod/console-f9d7485db-tgsfn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Oct 13 06:31:13 crc kubenswrapper[4833]: I1013 06:31:13.310605 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tgsfn" podUID="00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Oct 13 06:31:13 crc kubenswrapper[4833]: I1013 06:31:13.732445 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:13 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:13 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:13 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:13 crc kubenswrapper[4833]: I1013 06:31:13.732521 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:14 crc kubenswrapper[4833]: I1013 06:31:14.734232 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:14 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Oct 13 06:31:14 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:14 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:14 crc kubenswrapper[4833]: I1013 06:31:14.734490 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:15 crc kubenswrapper[4833]: I1013 06:31:15.732399 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dkwvf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 06:31:15 crc kubenswrapper[4833]: [+]has-synced ok Oct 13 06:31:15 crc kubenswrapper[4833]: [+]process-running ok Oct 13 06:31:15 crc kubenswrapper[4833]: healthz check failed Oct 13 06:31:15 crc kubenswrapper[4833]: I1013 06:31:15.732450 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkwvf" podUID="7b6ff3a0-c424-45f9-92e9-e7b5a46d7464" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 06:31:15 crc kubenswrapper[4833]: I1013 06:31:15.875876 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:15 crc kubenswrapper[4833]: I1013 06:31:15.966868 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2c787b5-6f34-422e-a227-bfefade4c11e-kubelet-dir\") pod \"c2c787b5-6f34-422e-a227-bfefade4c11e\" (UID: \"c2c787b5-6f34-422e-a227-bfefade4c11e\") " Oct 13 06:31:15 crc kubenswrapper[4833]: I1013 06:31:15.967029 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c787b5-6f34-422e-a227-bfefade4c11e-kube-api-access\") pod \"c2c787b5-6f34-422e-a227-bfefade4c11e\" (UID: \"c2c787b5-6f34-422e-a227-bfefade4c11e\") " Oct 13 06:31:15 crc kubenswrapper[4833]: I1013 06:31:15.967017 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2c787b5-6f34-422e-a227-bfefade4c11e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c2c787b5-6f34-422e-a227-bfefade4c11e" (UID: "c2c787b5-6f34-422e-a227-bfefade4c11e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:31:15 crc kubenswrapper[4833]: I1013 06:31:15.967248 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2c787b5-6f34-422e-a227-bfefade4c11e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 13 06:31:15 crc kubenswrapper[4833]: I1013 06:31:15.971719 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c787b5-6f34-422e-a227-bfefade4c11e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c2c787b5-6f34-422e-a227-bfefade4c11e" (UID: "c2c787b5-6f34-422e-a227-bfefade4c11e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:31:16 crc kubenswrapper[4833]: I1013 06:31:16.068670 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c787b5-6f34-422e-a227-bfefade4c11e-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 06:31:16 crc kubenswrapper[4833]: I1013 06:31:16.363907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c2c787b5-6f34-422e-a227-bfefade4c11e","Type":"ContainerDied","Data":"2201eb4303d828710de26feeb26a521a4d85c71cfeb7671d0430415306445445"} Oct 13 06:31:16 crc kubenswrapper[4833]: I1013 06:31:16.363955 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2201eb4303d828710de26feeb26a521a4d85c71cfeb7671d0430415306445445" Oct 13 06:31:16 crc kubenswrapper[4833]: I1013 06:31:16.364028 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 06:31:16 crc kubenswrapper[4833]: I1013 06:31:16.732081 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dkwvf" Oct 13 06:31:16 crc kubenswrapper[4833]: I1013 06:31:16.734848 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dkwvf" Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.671765 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.672308 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.672354 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-tkv6x" Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.672591 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.672679 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" 
podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.672793 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ee9419f5b0a3dd3e5fd971ce6c064a5dc0b7d25bf0363a7565402c287f01da59"} pod="openshift-console/downloads-7954f5f757-tkv6x" containerMessage="Container download-server failed liveness probe, will be restarted" Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.672862 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" containerID="cri-o://ee9419f5b0a3dd3e5fd971ce6c064a5dc0b7d25bf0363a7565402c287f01da59" gracePeriod=2 Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.673820 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:22 crc kubenswrapper[4833]: I1013 06:31:22.673879 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:23 crc kubenswrapper[4833]: I1013 06:31:23.362497 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:31:23 crc kubenswrapper[4833]: I1013 06:31:23.367122 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tgsfn" Oct 13 06:31:23 crc kubenswrapper[4833]: I1013 06:31:23.467257 4833 generic.go:334] "Generic (PLEG): container finished" podID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerID="ee9419f5b0a3dd3e5fd971ce6c064a5dc0b7d25bf0363a7565402c287f01da59" exitCode=0 Oct 13 06:31:23 crc kubenswrapper[4833]: I1013 06:31:23.467577 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tkv6x" event={"ID":"05a4a2c9-1543-49ae-9f86-ba208d564f75","Type":"ContainerDied","Data":"ee9419f5b0a3dd3e5fd971ce6c064a5dc0b7d25bf0363a7565402c287f01da59"} Oct 13 06:31:24 crc kubenswrapper[4833]: I1013 06:31:24.146057 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:31:30 crc kubenswrapper[4833]: I1013 06:31:30.542771 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:31:30 crc kubenswrapper[4833]: I1013 06:31:30.543336 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 13 06:31:32 crc kubenswrapper[4833]: I1013 06:31:32.576237 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-slxv8" Oct 13 06:31:32 crc kubenswrapper[4833]: I1013 06:31:32.672005 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:32 crc kubenswrapper[4833]: I1013 06:31:32.672077 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:34 crc kubenswrapper[4833]: E1013 06:31:34.265784 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 13 06:31:34 crc kubenswrapper[4833]: E1013 06:31:34.265989 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf9n2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m7mtw_openshift-marketplace(0efc55e5-22c4-4df7-b07e-30bb441769c4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 13 06:31:34 crc kubenswrapper[4833]: E1013 06:31:34.267439 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-m7mtw" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" 
Oct 13 06:31:34 crc kubenswrapper[4833]: E1013 06:31:34.365503 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 13 06:31:34 crc kubenswrapper[4833]: E1013 06:31:34.365926 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x77ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8xfsx_openshift-marketplace(48b7084d-6299-4cfd-88f3-e2dca282c478): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 13 06:31:34 crc kubenswrapper[4833]: E1013 06:31:34.368099 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8xfsx" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" Oct 13 06:31:34 crc kubenswrapper[4833]: I1013 06:31:34.730790 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-28gq6"] Oct 13 06:31:34 crc kubenswrapper[4833]: E1013 06:31:34.934615 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8xfsx" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" Oct 13 06:31:34 crc kubenswrapper[4833]: E1013 06:31:34.934693 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-m7mtw" 
podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" Oct 13 06:31:35 crc kubenswrapper[4833]: E1013 06:31:35.001029 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 13 06:31:35 crc kubenswrapper[4833]: E1013 06:31:35.001630 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcfx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6l2jk_openshift-marketplace(af2f317c-d52d-483f-a616-0d4868b57951): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 13 06:31:35 crc kubenswrapper[4833]: E1013 06:31:35.002703 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6l2jk" podUID="af2f317c-d52d-483f-a616-0d4868b57951" Oct 13 06:31:35 crc kubenswrapper[4833]: E1013 06:31:35.004515 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 13 06:31:35 crc kubenswrapper[4833]: E1013 06:31:35.004679 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7b24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rdgr7_openshift-marketplace(57de76e3-fdf0-4c6e-aa11-702f0368cb41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 13 06:31:35 crc kubenswrapper[4833]: E1013 06:31:35.006785 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rdgr7" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" Oct 13 06:31:37 crc kubenswrapper[4833]: W1013 06:31:37.623611 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fd6b1c1_777a_46be_960c_c6109d1615ad.slice/crio-bab2278fed650f80a9d3a185c64e490984739ab710b1bba715352aa06cd7f4a8 WatchSource:0}: Error finding container bab2278fed650f80a9d3a185c64e490984739ab710b1bba715352aa06cd7f4a8: Status 404 returned error can't find the container with id bab2278fed650f80a9d3a185c64e490984739ab710b1bba715352aa06cd7f4a8 Oct 13 06:31:37 crc kubenswrapper[4833]: E1013 06:31:37.627098 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6l2jk" podUID="af2f317c-d52d-483f-a616-0d4868b57951" Oct 13 06:31:37 crc kubenswrapper[4833]: E1013 06:31:37.627469 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rdgr7" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" Oct 13 06:31:37 crc kubenswrapper[4833]: E1013 06:31:37.646320 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 13 06:31:37 crc kubenswrapper[4833]: E1013 06:31:37.646455 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjn5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8sz95_openshift-marketplace(6a01ee6c-348d-403c-835a-80cd28ddb6ee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 13 06:31:37 crc kubenswrapper[4833]: E1013 06:31:37.647746 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8sz95" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.552037 4833 generic.go:334] "Generic (PLEG): container finished" podID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerID="ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672" exitCode=0 Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.552071 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sw6h" event={"ID":"9cc9a234-36b5-410a-ab39-d8ee02cecf3c","Type":"ContainerDied","Data":"ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672"} Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.554812 4833 generic.go:334] "Generic (PLEG): container finished" podID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerID="befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5" exitCode=0 Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.554895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drnc5" event={"ID":"63fdb502-c6b1-44e8-86a8-c9571886f5b3","Type":"ContainerDied","Data":"befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5"} Oct 13 06:31:38 crc 
kubenswrapper[4833]: I1013 06:31:38.555569 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.556605 4833 generic.go:334] "Generic (PLEG): container finished" podID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerID="e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9" exitCode=0 Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.556671 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwg98" event={"ID":"6c65c1f2-5e55-4133-a013-d4d5e101e0d7","Type":"ContainerDied","Data":"e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9"} Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.560214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-28gq6" event={"ID":"2fd6b1c1-777a-46be-960c-c6109d1615ad","Type":"ContainerStarted","Data":"dfb41c93e26b298f98e816dab86493d4dd0f43f1cbfd74da2145eb07bba90258"} Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.560377 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-28gq6" event={"ID":"2fd6b1c1-777a-46be-960c-c6109d1615ad","Type":"ContainerStarted","Data":"1a4a52a25426a4e98673f113db5f4d35a7c32dd3d9ebc0ffab2eca331a06fe51"} Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.560504 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-28gq6" event={"ID":"2fd6b1c1-777a-46be-960c-c6109d1615ad","Type":"ContainerStarted","Data":"bab2278fed650f80a9d3a185c64e490984739ab710b1bba715352aa06cd7f4a8"} Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.562791 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tkv6x" event={"ID":"05a4a2c9-1543-49ae-9f86-ba208d564f75","Type":"ContainerStarted","Data":"d17bb1d2f0879d14ab83bc516e1d35f75c332cc2fbab47c0ea5736538481e377"} Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.562961 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tkv6x" Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.563277 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.563324 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:38 crc kubenswrapper[4833]: E1013 06:31:38.569819 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8sz95" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" Oct 13 06:31:38 crc kubenswrapper[4833]: I1013 06:31:38.664483 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-28gq6" podStartSLOduration=168.664463681 
podStartE2EDuration="2m48.664463681s" podCreationTimestamp="2025-10-13 06:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:31:38.66202438 +0000 UTC m=+188.762447306" watchObservedRunningTime="2025-10-13 06:31:38.664463681 +0000 UTC m=+188.764886607" Oct 13 06:31:39 crc kubenswrapper[4833]: I1013 06:31:39.571041 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwg98" event={"ID":"6c65c1f2-5e55-4133-a013-d4d5e101e0d7","Type":"ContainerStarted","Data":"edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa"} Oct 13 06:31:39 crc kubenswrapper[4833]: I1013 06:31:39.573369 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sw6h" event={"ID":"9cc9a234-36b5-410a-ab39-d8ee02cecf3c","Type":"ContainerStarted","Data":"7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d"} Oct 13 06:31:39 crc kubenswrapper[4833]: I1013 06:31:39.575427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drnc5" event={"ID":"63fdb502-c6b1-44e8-86a8-c9571886f5b3","Type":"ContainerStarted","Data":"b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061"} Oct 13 06:31:39 crc kubenswrapper[4833]: I1013 06:31:39.576280 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:39 crc kubenswrapper[4833]: I1013 06:31:39.576352 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:39 crc kubenswrapper[4833]: I1013 06:31:39.601510 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vwg98" podStartSLOduration=2.797645794 podStartE2EDuration="39.601483202s" podCreationTimestamp="2025-10-13 06:31:00 +0000 UTC" firstStartedPulling="2025-10-13 06:31:02.155749869 +0000 UTC m=+152.256172795" lastFinishedPulling="2025-10-13 06:31:38.959587277 +0000 UTC m=+189.060010203" observedRunningTime="2025-10-13 06:31:39.596792926 +0000 UTC m=+189.697215872" watchObservedRunningTime="2025-10-13 06:31:39.601483202 +0000 UTC m=+189.701906118" Oct 13 06:31:39 crc kubenswrapper[4833]: I1013 06:31:39.632878 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-drnc5" podStartSLOduration=2.679794058 podStartE2EDuration="36.632855856s" podCreationTimestamp="2025-10-13 06:31:03 +0000 UTC" firstStartedPulling="2025-10-13 06:31:05.277362997 +0000 UTC m=+155.377785913" lastFinishedPulling="2025-10-13 06:31:39.230424795 +0000 UTC m=+189.330847711" observedRunningTime="2025-10-13 06:31:39.626283105 +0000 UTC m=+189.726706011" watchObservedRunningTime="2025-10-13 06:31:39.632855856 +0000 UTC m=+189.733278772" Oct 13 06:31:39 crc kubenswrapper[4833]: I1013 06:31:39.650369 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4sw6h" podStartSLOduration=2.709740374 podStartE2EDuration="39.650345446s" 
podCreationTimestamp="2025-10-13 06:31:00 +0000 UTC" firstStartedPulling="2025-10-13 06:31:02.103690122 +0000 UTC m=+152.204113038" lastFinishedPulling="2025-10-13 06:31:39.044295194 +0000 UTC m=+189.144718110" observedRunningTime="2025-10-13 06:31:39.647088341 +0000 UTC m=+189.747511257" watchObservedRunningTime="2025-10-13 06:31:39.650345446 +0000 UTC m=+189.750768362" Oct 13 06:31:40 crc kubenswrapper[4833]: I1013 06:31:40.583304 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:40 crc kubenswrapper[4833]: I1013 06:31:40.583690 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:40 crc kubenswrapper[4833]: I1013 06:31:40.755853 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:40 crc kubenswrapper[4833]: I1013 06:31:40.755904 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:40 crc kubenswrapper[4833]: I1013 06:31:40.974030 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:40 crc kubenswrapper[4833]: I1013 06:31:40.974079 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:41 crc kubenswrapper[4833]: I1013 06:31:41.121677 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:42 crc kubenswrapper[4833]: I1013 06:31:42.107765 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vwg98" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="registry-server" probeResult="failure" output=< Oct 13 06:31:42 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Oct 13 06:31:42 crc kubenswrapper[4833]: > Oct 13 06:31:42 crc kubenswrapper[4833]: I1013 06:31:42.671516 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:42 crc kubenswrapper[4833]: I1013 06:31:42.671826 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:42 crc kubenswrapper[4833]: I1013 06:31:42.671704 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-tkv6x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Oct 13 06:31:42 crc kubenswrapper[4833]: I1013 06:31:42.671951 4833 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tkv6x" podUID="05a4a2c9-1543-49ae-9f86-ba208d564f75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Oct 13 06:31:44 crc kubenswrapper[4833]: I1013 06:31:44.008343 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:44 crc kubenswrapper[4833]: I1013 06:31:44.008641 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:45 crc kubenswrapper[4833]: I1013 06:31:45.062253 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drnc5" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="registry-server" probeResult="failure" output=< Oct 13 06:31:45 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Oct 13 06:31:45 crc kubenswrapper[4833]: > Oct 13 06:31:50 crc kubenswrapper[4833]: I1013 06:31:50.798690 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:31:51 crc kubenswrapper[4833]: I1013 06:31:51.044132 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:51 crc kubenswrapper[4833]: I1013 06:31:51.108893 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:31:52 crc kubenswrapper[4833]: I1013 06:31:52.691116 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tkv6x" Oct 13 06:31:54 crc kubenswrapper[4833]: I1013 06:31:54.067251 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:54 crc kubenswrapper[4833]: I1013 06:31:54.120905 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:31:55 crc kubenswrapper[4833]: I1013 06:31:55.673722 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7mtw" event={"ID":"0efc55e5-22c4-4df7-b07e-30bb441769c4","Type":"ContainerStarted","Data":"b17cb64359acec22f8360e8cad750a44d4385f05b412d55921879a62ee55eacd"} Oct 13 06:31:55 crc kubenswrapper[4833]: I1013 06:31:55.675707 4833 generic.go:334] "Generic (PLEG): container finished" podID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerID="f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f" exitCode=0 Oct 13 06:31:55 crc kubenswrapper[4833]: I1013 06:31:55.675766 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgr7" event={"ID":"57de76e3-fdf0-4c6e-aa11-702f0368cb41","Type":"ContainerDied","Data":"f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f"} Oct 13 06:31:55 crc kubenswrapper[4833]: I1013 06:31:55.679249 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xfsx" event={"ID":"48b7084d-6299-4cfd-88f3-e2dca282c478","Type":"ContainerStarted","Data":"68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086"} Oct 13 06:31:55 crc kubenswrapper[4833]: I1013 06:31:55.682343 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8sz95" event={"ID":"6a01ee6c-348d-403c-835a-80cd28ddb6ee","Type":"ContainerStarted","Data":"61c6e531d4bdf038b4a1da11d8ccadec6e00f75c7cb362f4c619d92d1cf8318f"} Oct 13 06:31:55 crc kubenswrapper[4833]: I1013 06:31:55.684351 4833 generic.go:334] "Generic (PLEG): container finished" podID="af2f317c-d52d-483f-a616-0d4868b57951" containerID="cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98" exitCode=0 Oct 13 06:31:55 crc kubenswrapper[4833]: I1013 06:31:55.684387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6l2jk" event={"ID":"af2f317c-d52d-483f-a616-0d4868b57951","Type":"ContainerDied","Data":"cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98"} Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.692014 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgr7" event={"ID":"57de76e3-fdf0-4c6e-aa11-702f0368cb41","Type":"ContainerStarted","Data":"d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6"} Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.698109 4833 generic.go:334] "Generic (PLEG): container finished" podID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerID="68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086" exitCode=0 Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.698174 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xfsx" event={"ID":"48b7084d-6299-4cfd-88f3-e2dca282c478","Type":"ContainerDied","Data":"68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086"} Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.700294 4833 generic.go:334] "Generic (PLEG): container finished" podID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerID="61c6e531d4bdf038b4a1da11d8ccadec6e00f75c7cb362f4c619d92d1cf8318f" exitCode=0 Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.700360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sz95" event={"ID":"6a01ee6c-348d-403c-835a-80cd28ddb6ee","Type":"ContainerDied","Data":"61c6e531d4bdf038b4a1da11d8ccadec6e00f75c7cb362f4c619d92d1cf8318f"} Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.704492 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6l2jk" event={"ID":"af2f317c-d52d-483f-a616-0d4868b57951","Type":"ContainerStarted","Data":"f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6"} Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.706718 4833 generic.go:334] "Generic (PLEG): container finished" podID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerID="b17cb64359acec22f8360e8cad750a44d4385f05b412d55921879a62ee55eacd" exitCode=0 Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.706764 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7mtw" event={"ID":"0efc55e5-22c4-4df7-b07e-30bb441769c4","Type":"ContainerDied","Data":"b17cb64359acec22f8360e8cad750a44d4385f05b412d55921879a62ee55eacd"} Oct 13 06:31:56 crc kubenswrapper[4833]: I1013 06:31:56.721183 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdgr7" podStartSLOduration=1.852997477 podStartE2EDuration="53.721162052s" podCreationTimestamp="2025-10-13 06:31:03 +0000 UTC" firstStartedPulling="2025-10-13 06:31:04.266720523 +0000 UTC m=+154.367143449" 
lastFinishedPulling="2025-10-13 06:31:56.134884898 +0000 UTC m=+206.235308024" observedRunningTime="2025-10-13 06:31:56.716890846 +0000 UTC m=+206.817313762" watchObservedRunningTime="2025-10-13 06:31:56.721162052 +0000 UTC m=+206.821584988" Oct 13 06:31:57 crc kubenswrapper[4833]: I1013 06:31:57.715259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7mtw" event={"ID":"0efc55e5-22c4-4df7-b07e-30bb441769c4","Type":"ContainerStarted","Data":"bafd744039fafb0ea1025997d4bb0a230c88fe7535784072d069e4347bb2f4a6"} Oct 13 06:31:57 crc kubenswrapper[4833]: I1013 06:31:57.719465 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xfsx" event={"ID":"48b7084d-6299-4cfd-88f3-e2dca282c478","Type":"ContainerStarted","Data":"59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3"} Oct 13 06:31:57 crc kubenswrapper[4833]: I1013 06:31:57.721360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sz95" event={"ID":"6a01ee6c-348d-403c-835a-80cd28ddb6ee","Type":"ContainerStarted","Data":"0d562a47c8cf908cd1b3f4b44938ba805141a697e0b88bf85db96d96ec2feed3"} Oct 13 06:31:57 crc kubenswrapper[4833]: I1013 06:31:57.733282 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6l2jk" podStartSLOduration=3.895744391 podStartE2EDuration="55.733267281s" podCreationTimestamp="2025-10-13 06:31:02 +0000 UTC" firstStartedPulling="2025-10-13 06:31:04.254358683 +0000 UTC m=+154.354781599" lastFinishedPulling="2025-10-13 06:31:56.091881573 +0000 UTC m=+206.192304489" observedRunningTime="2025-10-13 06:31:56.794774457 +0000 UTC m=+206.895197373" watchObservedRunningTime="2025-10-13 06:31:57.733267281 +0000 UTC m=+207.833690197" Oct 13 06:31:57 crc kubenswrapper[4833]: I1013 06:31:57.734578 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7mtw" podStartSLOduration=2.839573652 podStartE2EDuration="56.734571669s" podCreationTimestamp="2025-10-13 06:31:01 +0000 UTC" firstStartedPulling="2025-10-13 06:31:03.246329093 +0000 UTC m=+153.346751999" lastFinishedPulling="2025-10-13 06:31:57.14132707 +0000 UTC m=+207.241750016" observedRunningTime="2025-10-13 06:31:57.733493368 +0000 UTC m=+207.833916284" watchObservedRunningTime="2025-10-13 06:31:57.734571669 +0000 UTC m=+207.834994585" Oct 13 06:31:57 crc kubenswrapper[4833]: I1013 06:31:57.753253 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8xfsx" podStartSLOduration=2.706954561 podStartE2EDuration="57.753228708s" podCreationTimestamp="2025-10-13 06:31:00 +0000 UTC" firstStartedPulling="2025-10-13 06:31:02.122680936 +0000 UTC m=+152.223103852" lastFinishedPulling="2025-10-13 06:31:57.168955083 +0000 UTC m=+207.269377999" observedRunningTime="2025-10-13 06:31:57.752871057 +0000 UTC m=+207.853293983" watchObservedRunningTime="2025-10-13 06:31:57.753228708 +0000 UTC m=+207.853651644" Oct 13 06:31:57 crc kubenswrapper[4833]: I1013 06:31:57.772290 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8sz95" podStartSLOduration=2.95169073 podStartE2EDuration="53.772275608s" podCreationTimestamp="2025-10-13 06:31:04 +0000 UTC" firstStartedPulling="2025-10-13 06:31:06.301225488 +0000 UTC m=+156.401648404" lastFinishedPulling="2025-10-13 06:31:57.121810366 +0000 UTC 
m=+207.222233282" observedRunningTime="2025-10-13 06:31:57.771676221 +0000 UTC m=+207.872099147" watchObservedRunningTime="2025-10-13 06:31:57.772275608 +0000 UTC m=+207.872698524" Oct 13 06:32:00 crc kubenswrapper[4833]: I1013 06:32:00.542548 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:32:00 crc kubenswrapper[4833]: I1013 06:32:00.543094 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:32:00 crc kubenswrapper[4833]: I1013 06:32:00.543156 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:32:00 crc kubenswrapper[4833]: I1013 06:32:00.543856 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 06:32:00 crc kubenswrapper[4833]: I1013 06:32:00.543919 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388" gracePeriod=600 Oct 13 06:32:00 crc kubenswrapper[4833]: I1013 06:32:00.761167 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7dz2"] Oct 13 06:32:00 crc kubenswrapper[4833]: I1013 06:32:00.775092 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388" exitCode=0 Oct 13 06:32:00 crc kubenswrapper[4833]: I1013 06:32:00.775366 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388"} Oct 13 06:32:01 crc kubenswrapper[4833]: I1013 06:32:01.199007 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:32:01 crc kubenswrapper[4833]: I1013 06:32:01.199434 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:32:01 crc kubenswrapper[4833]: I1013 06:32:01.243201 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:32:01 crc kubenswrapper[4833]: I1013 06:32:01.394373 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7mtw" Oct 13 06:32:01 crc kubenswrapper[4833]: I1013 
06:32:01.394750 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m7mtw" Oct 13 06:32:01 crc kubenswrapper[4833]: I1013 06:32:01.435591 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7mtw" Oct 13 06:32:01 crc kubenswrapper[4833]: I1013 06:32:01.781580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"dfe0e3b90fb39f70a625aa18e10ffc3bc8f0d046e4ae8138cd65de6a8868a93f"} Oct 13 06:32:02 crc kubenswrapper[4833]: I1013 06:32:02.958321 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6l2jk" Oct 13 06:32:02 crc kubenswrapper[4833]: I1013 06:32:02.958818 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6l2jk" Oct 13 06:32:03 crc kubenswrapper[4833]: I1013 06:32:03.002907 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6l2jk" Oct 13 06:32:03 crc kubenswrapper[4833]: I1013 06:32:03.389982 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdgr7" Oct 13 06:32:03 crc kubenswrapper[4833]: I1013 06:32:03.390037 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdgr7" Oct 13 06:32:03 crc kubenswrapper[4833]: I1013 06:32:03.427258 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdgr7" Oct 13 06:32:03 crc kubenswrapper[4833]: I1013 06:32:03.827559 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdgr7" Oct 13 06:32:03 crc kubenswrapper[4833]: I1013 06:32:03.831272 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6l2jk" Oct 13 06:32:04 crc kubenswrapper[4833]: I1013 06:32:04.401798 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:32:04 crc kubenswrapper[4833]: I1013 06:32:04.401849 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:32:04 crc kubenswrapper[4833]: I1013 06:32:04.459154 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:32:04 crc kubenswrapper[4833]: I1013 06:32:04.842374 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:32:06 crc kubenswrapper[4833]: I1013 06:32:06.865086 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgr7"] Oct 13 06:32:06 crc kubenswrapper[4833]: I1013 06:32:06.865806 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rdgr7" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerName="registry-server" containerID="cri-o://d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6" gracePeriod=2 Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.071717 4833 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8sz95"] Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.072056 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8sz95" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerName="registry-server" containerID="cri-o://0d562a47c8cf908cd1b3f4b44938ba805141a697e0b88bf85db96d96ec2feed3" gracePeriod=2 Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.520062 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdgr7" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.624067 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-catalog-content\") pod \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.624202 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-utilities\") pod \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.624281 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7b24\" (UniqueName: \"kubernetes.io/projected/57de76e3-fdf0-4c6e-aa11-702f0368cb41-kube-api-access-s7b24\") pod \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\" (UID: \"57de76e3-fdf0-4c6e-aa11-702f0368cb41\") " Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.625077 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-utilities" (OuterVolumeSpecName: "utilities") pod "57de76e3-fdf0-4c6e-aa11-702f0368cb41" (UID: "57de76e3-fdf0-4c6e-aa11-702f0368cb41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.636500 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57de76e3-fdf0-4c6e-aa11-702f0368cb41" (UID: "57de76e3-fdf0-4c6e-aa11-702f0368cb41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.651397 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57de76e3-fdf0-4c6e-aa11-702f0368cb41-kube-api-access-s7b24" (OuterVolumeSpecName: "kube-api-access-s7b24") pod "57de76e3-fdf0-4c6e-aa11-702f0368cb41" (UID: "57de76e3-fdf0-4c6e-aa11-702f0368cb41"). InnerVolumeSpecName "kube-api-access-s7b24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.726288 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7b24\" (UniqueName: \"kubernetes.io/projected/57de76e3-fdf0-4c6e-aa11-702f0368cb41-kube-api-access-s7b24\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.726324 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.726336 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57de76e3-fdf0-4c6e-aa11-702f0368cb41-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.816608 4833 generic.go:334] "Generic (PLEG): container finished" podID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerID="0d562a47c8cf908cd1b3f4b44938ba805141a697e0b88bf85db96d96ec2feed3" exitCode=0 Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.816685 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sz95" event={"ID":"6a01ee6c-348d-403c-835a-80cd28ddb6ee","Type":"ContainerDied","Data":"0d562a47c8cf908cd1b3f4b44938ba805141a697e0b88bf85db96d96ec2feed3"} Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.819068 4833 generic.go:334] "Generic (PLEG): container finished" podID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerID="d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6" exitCode=0 Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.819104 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgr7" event={"ID":"57de76e3-fdf0-4c6e-aa11-702f0368cb41","Type":"ContainerDied","Data":"d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6"} Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.819134 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdgr7" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.819153 4833 scope.go:117] "RemoveContainer" containerID="d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.819134 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgr7" event={"ID":"57de76e3-fdf0-4c6e-aa11-702f0368cb41","Type":"ContainerDied","Data":"07b7d1bfe12aa0933c4315d471be92f9bb6a4a3c4c1ea0732f9b01cf4d6bd6d1"} Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.841778 4833 scope.go:117] "RemoveContainer" containerID="f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.843940 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgr7"] Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.846244 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgr7"] Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.868003 4833 scope.go:117] "RemoveContainer" containerID="0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.883156 4833 scope.go:117] "RemoveContainer" containerID="d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6" Oct 13 06:32:07 crc kubenswrapper[4833]: E1013 06:32:07.883583 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6\": container with ID starting with d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6 not found: ID does not exist" containerID="d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.883618 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6"} err="failed to get container status \"d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6\": rpc error: code = NotFound desc = could not find container \"d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6\": container with ID starting with d85d1b4ddc7efdf68e6bc945cdad44d9b19c973b279b908e9a5563639d4266d6 not found: ID does not exist" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.883643 4833 scope.go:117] "RemoveContainer" containerID="f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f" Oct 13 06:32:07 crc kubenswrapper[4833]: E1013 06:32:07.883940 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f\": container with ID starting with f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f not found: ID does not exist" containerID="f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.883971 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f"} err="failed to get container status \"f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f\": rpc error: code = NotFound desc = could not find 
container \"f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f\": container with ID starting with f7a600626ea4543c4a4e5e71e66375df06a8ad715da1a3df6d246d4a14acd45f not found: ID does not exist" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.883996 4833 scope.go:117] "RemoveContainer" containerID="0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e" Oct 13 06:32:07 crc kubenswrapper[4833]: E1013 06:32:07.884260 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e\": container with ID starting with 0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e not found: ID does not exist" containerID="0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e" Oct 13 06:32:07 crc kubenswrapper[4833]: I1013 06:32:07.884289 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e"} err="failed to get container status \"0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e\": rpc error: code = NotFound desc = could not find container \"0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e\": container with ID starting with 0b4a7974624e16fd690495eb9ea6a656f869da850788e7049863b4496713aa3e not found: ID does not exist" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.190791 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.332996 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjn5g\" (UniqueName: \"kubernetes.io/projected/6a01ee6c-348d-403c-835a-80cd28ddb6ee-kube-api-access-bjn5g\") pod \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.333083 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-utilities\") pod \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.333137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-catalog-content\") pod \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\" (UID: \"6a01ee6c-348d-403c-835a-80cd28ddb6ee\") " Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.334131 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-utilities" (OuterVolumeSpecName: "utilities") pod "6a01ee6c-348d-403c-835a-80cd28ddb6ee" (UID: "6a01ee6c-348d-403c-835a-80cd28ddb6ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.337267 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a01ee6c-348d-403c-835a-80cd28ddb6ee-kube-api-access-bjn5g" (OuterVolumeSpecName: "kube-api-access-bjn5g") pod "6a01ee6c-348d-403c-835a-80cd28ddb6ee" (UID: "6a01ee6c-348d-403c-835a-80cd28ddb6ee"). InnerVolumeSpecName "kube-api-access-bjn5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.419291 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a01ee6c-348d-403c-835a-80cd28ddb6ee" (UID: "6a01ee6c-348d-403c-835a-80cd28ddb6ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.434017 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.434049 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a01ee6c-348d-403c-835a-80cd28ddb6ee-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.434066 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjn5g\" (UniqueName: \"kubernetes.io/projected/6a01ee6c-348d-403c-835a-80cd28ddb6ee-kube-api-access-bjn5g\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.633605 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" path="/var/lib/kubelet/pods/57de76e3-fdf0-4c6e-aa11-702f0368cb41/volumes" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.827145 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sz95" event={"ID":"6a01ee6c-348d-403c-835a-80cd28ddb6ee","Type":"ContainerDied","Data":"5687ab767bc6716bc16c7075b4d411c3c8e2eb33875ae1d37ee49ad4c11734e3"} Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.827209 4833 scope.go:117] "RemoveContainer" containerID="0d562a47c8cf908cd1b3f4b44938ba805141a697e0b88bf85db96d96ec2feed3" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.827212 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8sz95" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.844158 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8sz95"] Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.849007 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8sz95"] Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.850193 4833 scope.go:117] "RemoveContainer" containerID="61c6e531d4bdf038b4a1da11d8ccadec6e00f75c7cb362f4c619d92d1cf8318f" Oct 13 06:32:08 crc kubenswrapper[4833]: I1013 06:32:08.865682 4833 scope.go:117] "RemoveContainer" containerID="2d254f5eebe9fa469452067d9f2a3c85d1d3f3bdbae5c68ec7418ce6797659b5" Oct 13 06:32:10 crc kubenswrapper[4833]: I1013 06:32:10.634366 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" path="/var/lib/kubelet/pods/6a01ee6c-348d-403c-835a-80cd28ddb6ee/volumes" Oct 13 06:32:11 crc kubenswrapper[4833]: I1013 06:32:11.239652 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:32:11 crc kubenswrapper[4833]: I1013 06:32:11.431880 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7mtw" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.064895 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xfsx"] Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.065353 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8xfsx" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerName="registry-server" containerID="cri-o://59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3" gracePeriod=2 Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.432492 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.593038 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77ls\" (UniqueName: \"kubernetes.io/projected/48b7084d-6299-4cfd-88f3-e2dca282c478-kube-api-access-x77ls\") pod \"48b7084d-6299-4cfd-88f3-e2dca282c478\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.593341 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-utilities\") pod \"48b7084d-6299-4cfd-88f3-e2dca282c478\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.593429 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-catalog-content\") pod \"48b7084d-6299-4cfd-88f3-e2dca282c478\" (UID: \"48b7084d-6299-4cfd-88f3-e2dca282c478\") " Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.594031 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-utilities" (OuterVolumeSpecName: "utilities") pod "48b7084d-6299-4cfd-88f3-e2dca282c478" (UID: "48b7084d-6299-4cfd-88f3-e2dca282c478"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.597866 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b7084d-6299-4cfd-88f3-e2dca282c478-kube-api-access-x77ls" (OuterVolumeSpecName: "kube-api-access-x77ls") pod "48b7084d-6299-4cfd-88f3-e2dca282c478" (UID: "48b7084d-6299-4cfd-88f3-e2dca282c478"). InnerVolumeSpecName "kube-api-access-x77ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.636123 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48b7084d-6299-4cfd-88f3-e2dca282c478" (UID: "48b7084d-6299-4cfd-88f3-e2dca282c478"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.664128 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7mtw"] Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.664349 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7mtw" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerName="registry-server" containerID="cri-o://bafd744039fafb0ea1025997d4bb0a230c88fe7535784072d069e4347bb2f4a6" gracePeriod=2 Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.695283 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77ls\" (UniqueName: \"kubernetes.io/projected/48b7084d-6299-4cfd-88f3-e2dca282c478-kube-api-access-x77ls\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.695312 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.695323 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b7084d-6299-4cfd-88f3-e2dca282c478-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.856584 4833 generic.go:334] "Generic (PLEG): container finished" podID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerID="bafd744039fafb0ea1025997d4bb0a230c88fe7535784072d069e4347bb2f4a6" exitCode=0 Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.856673 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7mtw" event={"ID":"0efc55e5-22c4-4df7-b07e-30bb441769c4","Type":"ContainerDied","Data":"bafd744039fafb0ea1025997d4bb0a230c88fe7535784072d069e4347bb2f4a6"} Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.876806 4833 generic.go:334] "Generic (PLEG): container finished" podID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerID="59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3" exitCode=0 Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.876841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xfsx" event={"ID":"48b7084d-6299-4cfd-88f3-e2dca282c478","Type":"ContainerDied","Data":"59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3"} Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 
06:32:13.876867 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xfsx" event={"ID":"48b7084d-6299-4cfd-88f3-e2dca282c478","Type":"ContainerDied","Data":"fef6fa5d8b0b470ab41462ed693f8a4251dc9c9c96afcd525e23e6ea2566d438"} Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.876885 4833 scope.go:117] "RemoveContainer" containerID="59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.877007 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xfsx" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.894341 4833 scope.go:117] "RemoveContainer" containerID="68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.911041 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xfsx"] Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.913829 4833 scope.go:117] "RemoveContainer" containerID="d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.914147 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8xfsx"] Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.939976 4833 scope.go:117] "RemoveContainer" containerID="59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3" Oct 13 06:32:13 crc kubenswrapper[4833]: E1013 06:32:13.940407 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3\": container with ID starting with 59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3 not found: ID does not exist" containerID="59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.940477 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3"} err="failed to get container status \"59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3\": rpc error: code = NotFound desc = could not find container \"59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3\": container with ID starting with 59334feb3a8225afa79f974e25cb454f566b34d39b1c2df2bd2e1d0381be56a3 not found: ID does not exist" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.940741 4833 scope.go:117] "RemoveContainer" containerID="68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086" Oct 13 06:32:13 crc kubenswrapper[4833]: E1013 06:32:13.941298 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086\": container with ID starting with 68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086 not found: ID does not exist" containerID="68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.941329 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086"} err="failed to get container status 
\"68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086\": rpc error: code = NotFound desc = could not find container \"68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086\": container with ID starting with 68bb2d248e106ebd353f886c31f313e75bad9548e7de1516ce3c818e8f9af086 not found: ID does not exist" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.941343 4833 scope.go:117] "RemoveContainer" containerID="d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb" Oct 13 06:32:13 crc kubenswrapper[4833]: E1013 06:32:13.942302 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb\": container with ID starting with d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb not found: ID does not exist" containerID="d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.942351 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb"} err="failed to get container status \"d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb\": rpc error: code = NotFound desc = could not find container \"d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb\": container with ID starting with d1a4587bc30e22aee9ae7f5935455056fd2ba4ba2bb338d45d2047829137b4eb not found: ID does not exist" Oct 13 06:32:13 crc kubenswrapper[4833]: I1013 06:32:13.971387 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7mtw" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.100986 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9n2\" (UniqueName: \"kubernetes.io/projected/0efc55e5-22c4-4df7-b07e-30bb441769c4-kube-api-access-zf9n2\") pod \"0efc55e5-22c4-4df7-b07e-30bb441769c4\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.101096 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-catalog-content\") pod \"0efc55e5-22c4-4df7-b07e-30bb441769c4\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.101143 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-utilities\") pod \"0efc55e5-22c4-4df7-b07e-30bb441769c4\" (UID: \"0efc55e5-22c4-4df7-b07e-30bb441769c4\") " Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.101915 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-utilities" (OuterVolumeSpecName: "utilities") pod "0efc55e5-22c4-4df7-b07e-30bb441769c4" (UID: "0efc55e5-22c4-4df7-b07e-30bb441769c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.102140 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.104674 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efc55e5-22c4-4df7-b07e-30bb441769c4-kube-api-access-zf9n2" (OuterVolumeSpecName: "kube-api-access-zf9n2") pod "0efc55e5-22c4-4df7-b07e-30bb441769c4" (UID: "0efc55e5-22c4-4df7-b07e-30bb441769c4"). InnerVolumeSpecName "kube-api-access-zf9n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.159968 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0efc55e5-22c4-4df7-b07e-30bb441769c4" (UID: "0efc55e5-22c4-4df7-b07e-30bb441769c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.203677 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9n2\" (UniqueName: \"kubernetes.io/projected/0efc55e5-22c4-4df7-b07e-30bb441769c4-kube-api-access-zf9n2\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.203731 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0efc55e5-22c4-4df7-b07e-30bb441769c4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.634677 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" path="/var/lib/kubelet/pods/48b7084d-6299-4cfd-88f3-e2dca282c478/volumes" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.886759 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7mtw" event={"ID":"0efc55e5-22c4-4df7-b07e-30bb441769c4","Type":"ContainerDied","Data":"b14d79336297daca0182a5b38f55a3612d344e844c666a76ef861eee64bdffe5"} Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.886820 4833 scope.go:117] "RemoveContainer" containerID="bafd744039fafb0ea1025997d4bb0a230c88fe7535784072d069e4347bb2f4a6" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.887012 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7mtw" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.902109 4833 scope.go:117] "RemoveContainer" containerID="b17cb64359acec22f8360e8cad750a44d4385f05b412d55921879a62ee55eacd" Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.908744 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7mtw"] Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.917915 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7mtw"] Oct 13 06:32:14 crc kubenswrapper[4833]: I1013 06:32:14.925765 4833 scope.go:117] "RemoveContainer" containerID="1966f5a66a7e0d84d2995346e19186a61dd96aaefdb6c25c9f2f68f6bd92ceb3" Oct 13 06:32:16 crc kubenswrapper[4833]: I1013 06:32:16.636077 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" path="/var/lib/kubelet/pods/0efc55e5-22c4-4df7-b07e-30bb441769c4/volumes" Oct 13 06:32:25 crc kubenswrapper[4833]: I1013 06:32:25.794830 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" podUID="b4411e13-1d37-4d03-ad8a-7d24be467441" containerName="oauth-openshift" containerID="cri-o://1caf582fa4c7398683f68d8f1143d2641787f87c549ca020cc0f42a9b28b3b89" gracePeriod=15 Oct 13 06:32:25 crc kubenswrapper[4833]: I1013 06:32:25.959376 4833 generic.go:334] "Generic (PLEG): container finished" podID="b4411e13-1d37-4d03-ad8a-7d24be467441" containerID="1caf582fa4c7398683f68d8f1143d2641787f87c549ca020cc0f42a9b28b3b89" exitCode=0 Oct 13 06:32:25 crc kubenswrapper[4833]: I1013 06:32:25.959443 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" event={"ID":"b4411e13-1d37-4d03-ad8a-7d24be467441","Type":"ContainerDied","Data":"1caf582fa4c7398683f68d8f1143d2641787f87c549ca020cc0f42a9b28b3b89"} Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.223523 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.272959 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79b5c48459-6nggb"] Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273442 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273466 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273484 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerName="extract-content" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273498 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerName="extract-content" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273518 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c787b5-6f34-422e-a227-bfefade4c11e" containerName="pruner" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273533 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c787b5-6f34-422e-a227-bfefade4c11e" containerName="pruner" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273578 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerName="extract-utilities" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273592 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerName="extract-utilities" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273611 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4411e13-1d37-4d03-ad8a-7d24be467441" containerName="oauth-openshift" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273626 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4411e13-1d37-4d03-ad8a-7d24be467441" containerName="oauth-openshift" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273647 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273660 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273676 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerName="extract-utilities" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273689 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerName="extract-utilities" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273716 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerName="extract-content" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273729 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerName="extract-content" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273751 4833 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerName="extract-content" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273763 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerName="extract-content" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273778 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273793 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273812 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerName="extract-content" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273824 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerName="extract-content" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273841 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273855 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273876 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerName="extract-utilities" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273888 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerName="extract-utilities" Oct 13 06:32:26 crc kubenswrapper[4833]: E1013 06:32:26.273905 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerName="extract-utilities" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.273917 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerName="extract-utilities" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.274107 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a01ee6c-348d-403c-835a-80cd28ddb6ee" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.274132 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b7084d-6299-4cfd-88f3-e2dca282c478" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.274149 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efc55e5-22c4-4df7-b07e-30bb441769c4" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.274172 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="57de76e3-fdf0-4c6e-aa11-702f0368cb41" containerName="registry-server" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.274188 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4411e13-1d37-4d03-ad8a-7d24be467441" containerName="oauth-openshift" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.274208 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c787b5-6f34-422e-a227-bfefade4c11e" containerName="pruner" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.275040 4833 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.281145 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79b5c48459-6nggb"] Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365353 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77t8j\" (UniqueName: \"kubernetes.io/projected/b4411e13-1d37-4d03-ad8a-7d24be467441-kube-api-access-77t8j\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365419 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-router-certs\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365447 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-dir\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365466 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-cliconfig\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365487 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-trusted-ca-bundle\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365549 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-serving-cert\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365573 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-login\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365588 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-error\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365606 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-policies\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365623 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-service-ca\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365642 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-session\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365660 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-idp-0-file-data\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-provider-selection\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.365715 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-ocp-branding-template\") pod \"b4411e13-1d37-4d03-ad8a-7d24be467441\" (UID: \"b4411e13-1d37-4d03-ad8a-7d24be467441\") " Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.366609 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.367517 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.368259 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.368992 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.369008 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.373637 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.373950 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.374396 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4411e13-1d37-4d03-ad8a-7d24be467441-kube-api-access-77t8j" (OuterVolumeSpecName: "kube-api-access-77t8j") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "kube-api-access-77t8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.377102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.382444 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.383973 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.384713 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.384958 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.385405 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b4411e13-1d37-4d03-ad8a-7d24be467441" (UID: "b4411e13-1d37-4d03-ad8a-7d24be467441"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467239 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6k4v\" (UniqueName: \"kubernetes.io/projected/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-kube-api-access-d6k4v\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467345 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467364 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467381 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467423 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-audit-policies\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467603 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467717 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467755 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-audit-dir\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467781 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467832 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467856 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77t8j\" (UniqueName: \"kubernetes.io/projected/b4411e13-1d37-4d03-ad8a-7d24be467441-kube-api-access-77t8j\") 
on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467873 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467887 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467899 4833 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467912 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467923 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467932 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467941 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467950 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467958 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467967 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467976 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.467986 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4411e13-1d37-4d03-ad8a-7d24be467441-v4-0-config-user-template-provider-selection\") on node 
\"crc\" DevicePath \"\"" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.568916 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569010 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569058 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6k4v\" (UniqueName: \"kubernetes.io/projected/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-kube-api-access-d6k4v\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569114 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569158 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569249 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-audit-policies\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 
06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569404 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569448 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569485 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-audit-dir\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.569559 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.570109 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-audit-dir\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.570629 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.574797 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.574965 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.575033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-audit-policies\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.575692 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.575814 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.579680 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.580629 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.581076 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.583024 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.583529 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.585453 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.593809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6k4v\" (UniqueName: \"kubernetes.io/projected/e045c13e-6f7a-4530-92b7-3ea0ae9d3107-kube-api-access-d6k4v\") pod \"oauth-openshift-79b5c48459-6nggb\" (UID: \"e045c13e-6f7a-4530-92b7-3ea0ae9d3107\") " pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.597362 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.835624 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79b5c48459-6nggb"] Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.968299 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" event={"ID":"b4411e13-1d37-4d03-ad8a-7d24be467441","Type":"ContainerDied","Data":"c342d73f28c3cbce983fa52dd67aef537c61f9de595fac7dbab41d00a72c265b"} Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.968378 4833 scope.go:117] "RemoveContainer" containerID="1caf582fa4c7398683f68d8f1143d2641787f87c549ca020cc0f42a9b28b3b89" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.968675 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x7dz2" Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.971845 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" event={"ID":"e045c13e-6f7a-4530-92b7-3ea0ae9d3107","Type":"ContainerStarted","Data":"b60e68e5f8cf9440e21d6afd0eb6b232facac2061f9242d628c6d768eba888c1"} Oct 13 06:32:26 crc kubenswrapper[4833]: I1013 06:32:26.994677 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7dz2"] Oct 13 06:32:27 crc kubenswrapper[4833]: I1013 06:32:27.000123 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x7dz2"] Oct 13 06:32:27 crc kubenswrapper[4833]: I1013 06:32:27.981217 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" event={"ID":"e045c13e-6f7a-4530-92b7-3ea0ae9d3107","Type":"ContainerStarted","Data":"4bb8a9023e51a26428d33d153d7ebcb4489e95a83ef65e0d088c79b06c820dd5"} Oct 13 06:32:27 crc kubenswrapper[4833]: I1013 06:32:27.981492 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:27 crc kubenswrapper[4833]: I1013 06:32:27.991628 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" Oct 13 06:32:28 crc kubenswrapper[4833]: I1013 06:32:28.037032 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79b5c48459-6nggb" podStartSLOduration=28.037006405 podStartE2EDuration="28.037006405s" podCreationTimestamp="2025-10-13 06:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:32:28.008702943 +0000 UTC m=+238.109125889" watchObservedRunningTime="2025-10-13 06:32:28.037006405 +0000 UTC m=+238.137429361" Oct 13 06:32:28 crc kubenswrapper[4833]: I1013 06:32:28.637788 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4411e13-1d37-4d03-ad8a-7d24be467441" path="/var/lib/kubelet/pods/b4411e13-1d37-4d03-ad8a-7d24be467441/volumes" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.326032 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sw6h"] Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.327007 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4sw6h" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerName="registry-server" containerID="cri-o://7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d" gracePeriod=30 Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.338477 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vwg98"] Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.338740 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vwg98" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="registry-server" containerID="cri-o://edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa" gracePeriod=30 Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.360056 4833 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6g499"] Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.360344 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" podUID="1a1d87b9-cb40-4860-8445-4729e0945358" containerName="marketplace-operator" containerID="cri-o://1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67" gracePeriod=30 Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.371614 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6l2jk"] Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.371878 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6l2jk" podUID="af2f317c-d52d-483f-a616-0d4868b57951" containerName="registry-server" containerID="cri-o://f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6" gracePeriod=30 Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.376002 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hf4k4"] Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.377195 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.385368 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drnc5"] Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.385646 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-drnc5" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="registry-server" containerID="cri-o://b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061" gracePeriod=30 Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.389887 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hf4k4"] Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.523755 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9abfcabe-0f85-4d47-aace-d218b9245549-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.523814 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9abfcabe-0f85-4d47-aace-d218b9245549-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.523860 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlx7j\" (UniqueName: \"kubernetes.io/projected/9abfcabe-0f85-4d47-aace-d218b9245549-kube-api-access-nlx7j\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.625346 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9abfcabe-0f85-4d47-aace-d218b9245549-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.625729 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9abfcabe-0f85-4d47-aace-d218b9245549-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.625781 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlx7j\" (UniqueName: \"kubernetes.io/projected/9abfcabe-0f85-4d47-aace-d218b9245549-kube-api-access-nlx7j\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.626792 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9abfcabe-0f85-4d47-aace-d218b9245549-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.631737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9abfcabe-0f85-4d47-aace-d218b9245549-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.641131 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlx7j\" (UniqueName: \"kubernetes.io/projected/9abfcabe-0f85-4d47-aace-d218b9245549-kube-api-access-nlx7j\") pod \"marketplace-operator-79b997595-hf4k4\" (UID: \"9abfcabe-0f85-4d47-aace-d218b9245549\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.778753 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.782223 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.813984 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6l2jk" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.852844 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.883109 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.908309 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.928758 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-catalog-content\") pod \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.928891 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbwh8\" (UniqueName: \"kubernetes.io/projected/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-kube-api-access-tbwh8\") pod \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.928964 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-utilities\") pod \"af2f317c-d52d-483f-a616-0d4868b57951\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.928985 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcfx7\" (UniqueName: \"kubernetes.io/projected/af2f317c-d52d-483f-a616-0d4868b57951-kube-api-access-tcfx7\") pod \"af2f317c-d52d-483f-a616-0d4868b57951\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.929003 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-utilities\") pod \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\" (UID: \"6c65c1f2-5e55-4133-a013-d4d5e101e0d7\") " Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.929030 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-catalog-content\") pod \"af2f317c-d52d-483f-a616-0d4868b57951\" (UID: \"af2f317c-d52d-483f-a616-0d4868b57951\") " Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.932278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-utilities" (OuterVolumeSpecName: "utilities") pod "af2f317c-d52d-483f-a616-0d4868b57951" (UID: "af2f317c-d52d-483f-a616-0d4868b57951"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.933567 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-utilities" (OuterVolumeSpecName: "utilities") pod "6c65c1f2-5e55-4133-a013-d4d5e101e0d7" (UID: "6c65c1f2-5e55-4133-a013-d4d5e101e0d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.934191 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-kube-api-access-tbwh8" (OuterVolumeSpecName: "kube-api-access-tbwh8") pod "6c65c1f2-5e55-4133-a013-d4d5e101e0d7" (UID: "6c65c1f2-5e55-4133-a013-d4d5e101e0d7"). InnerVolumeSpecName "kube-api-access-tbwh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.936528 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af2f317c-d52d-483f-a616-0d4868b57951-kube-api-access-tcfx7" (OuterVolumeSpecName: "kube-api-access-tcfx7") pod "af2f317c-d52d-483f-a616-0d4868b57951" (UID: "af2f317c-d52d-483f-a616-0d4868b57951"). InnerVolumeSpecName "kube-api-access-tcfx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:54 crc kubenswrapper[4833]: I1013 06:32:54.949386 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af2f317c-d52d-483f-a616-0d4868b57951" (UID: "af2f317c-d52d-483f-a616-0d4868b57951"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.005080 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c65c1f2-5e55-4133-a013-d4d5e101e0d7" (UID: "6c65c1f2-5e55-4133-a013-d4d5e101e0d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032416 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-catalog-content\") pod \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-trusted-ca\") pod \"1a1d87b9-cb40-4860-8445-4729e0945358\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032647 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjwb5\" (UniqueName: \"kubernetes.io/projected/63fdb502-c6b1-44e8-86a8-c9571886f5b3-kube-api-access-xjwb5\") pod \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-utilities\") pod \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032715 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jggtq\" (UniqueName: \"kubernetes.io/projected/1a1d87b9-cb40-4860-8445-4729e0945358-kube-api-access-jggtq\") pod \"1a1d87b9-cb40-4860-8445-4729e0945358\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032744 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-catalog-content\") pod \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032798 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-operator-metrics\") pod \"1a1d87b9-cb40-4860-8445-4729e0945358\" (UID: \"1a1d87b9-cb40-4860-8445-4729e0945358\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032821 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-utilities\") pod \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\" (UID: \"63fdb502-c6b1-44e8-86a8-c9571886f5b3\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.032874 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnnlp\" (UniqueName: \"kubernetes.io/projected/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-kube-api-access-jnnlp\") pod \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\" (UID: \"9cc9a234-36b5-410a-ab39-d8ee02cecf3c\") " Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.033115 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbwh8\" (UniqueName: 
\"kubernetes.io/projected/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-kube-api-access-tbwh8\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.033144 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.033160 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcfx7\" (UniqueName: \"kubernetes.io/projected/af2f317c-d52d-483f-a616-0d4868b57951-kube-api-access-tcfx7\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.033174 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.033186 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af2f317c-d52d-483f-a616-0d4868b57951-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.033197 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c65c1f2-5e55-4133-a013-d4d5e101e0d7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.033130 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1a1d87b9-cb40-4860-8445-4729e0945358" (UID: "1a1d87b9-cb40-4860-8445-4729e0945358"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.033948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-utilities" (OuterVolumeSpecName: "utilities") pod "9cc9a234-36b5-410a-ab39-d8ee02cecf3c" (UID: "9cc9a234-36b5-410a-ab39-d8ee02cecf3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.034021 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-utilities" (OuterVolumeSpecName: "utilities") pod "63fdb502-c6b1-44e8-86a8-c9571886f5b3" (UID: "63fdb502-c6b1-44e8-86a8-c9571886f5b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.036080 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-kube-api-access-jnnlp" (OuterVolumeSpecName: "kube-api-access-jnnlp") pod "9cc9a234-36b5-410a-ab39-d8ee02cecf3c" (UID: "9cc9a234-36b5-410a-ab39-d8ee02cecf3c"). InnerVolumeSpecName "kube-api-access-jnnlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.036125 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63fdb502-c6b1-44e8-86a8-c9571886f5b3-kube-api-access-xjwb5" (OuterVolumeSpecName: "kube-api-access-xjwb5") pod "63fdb502-c6b1-44e8-86a8-c9571886f5b3" (UID: "63fdb502-c6b1-44e8-86a8-c9571886f5b3"). InnerVolumeSpecName "kube-api-access-xjwb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.036385 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1a1d87b9-cb40-4860-8445-4729e0945358" (UID: "1a1d87b9-cb40-4860-8445-4729e0945358"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.036462 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1d87b9-cb40-4860-8445-4729e0945358-kube-api-access-jggtq" (OuterVolumeSpecName: "kube-api-access-jggtq") pod "1a1d87b9-cb40-4860-8445-4729e0945358" (UID: "1a1d87b9-cb40-4860-8445-4729e0945358"). InnerVolumeSpecName "kube-api-access-jggtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.082045 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cc9a234-36b5-410a-ab39-d8ee02cecf3c" (UID: "9cc9a234-36b5-410a-ab39-d8ee02cecf3c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.134320 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnnlp\" (UniqueName: \"kubernetes.io/projected/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-kube-api-access-jnnlp\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.134368 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.134381 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjwb5\" (UniqueName: \"kubernetes.io/projected/63fdb502-c6b1-44e8-86a8-c9571886f5b3-kube-api-access-xjwb5\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.134394 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.134408 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jggtq\" (UniqueName: \"kubernetes.io/projected/1a1d87b9-cb40-4860-8445-4729e0945358-kube-api-access-jggtq\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.134419 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc9a234-36b5-410a-ab39-d8ee02cecf3c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.134430 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1a1d87b9-cb40-4860-8445-4729e0945358-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.134442 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.142727 4833 generic.go:334] "Generic (PLEG): container finished" podID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerID="edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa" exitCode=0 Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.142825 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwg98" event={"ID":"6c65c1f2-5e55-4133-a013-d4d5e101e0d7","Type":"ContainerDied","Data":"edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.142828 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vwg98" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.142864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwg98" event={"ID":"6c65c1f2-5e55-4133-a013-d4d5e101e0d7","Type":"ContainerDied","Data":"551fd92a95240d3e542b1c8fff8687ba87cf0a437932f8f75da9857dae90c9bf"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.142892 4833 scope.go:117] "RemoveContainer" containerID="edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.144892 4833 generic.go:334] "Generic (PLEG): container finished" podID="1a1d87b9-cb40-4860-8445-4729e0945358" containerID="1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67" exitCode=0 Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.145077 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" event={"ID":"1a1d87b9-cb40-4860-8445-4729e0945358","Type":"ContainerDied","Data":"1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.145142 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" event={"ID":"1a1d87b9-cb40-4860-8445-4729e0945358","Type":"ContainerDied","Data":"a99f2d24a81c7502faac060b2ea9f14c270eae647e842e771f976de45ec662b9"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.145243 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6g499" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.149702 4833 generic.go:334] "Generic (PLEG): container finished" podID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerID="7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d" exitCode=0 Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.149759 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sw6h" event={"ID":"9cc9a234-36b5-410a-ab39-d8ee02cecf3c","Type":"ContainerDied","Data":"7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.149782 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sw6h" event={"ID":"9cc9a234-36b5-410a-ab39-d8ee02cecf3c","Type":"ContainerDied","Data":"a8f89f49c1bf5172cd8da84f4563d9646b234566ee18248d3984d8509e128b46"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.149849 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sw6h" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.153104 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63fdb502-c6b1-44e8-86a8-c9571886f5b3" (UID: "63fdb502-c6b1-44e8-86a8-c9571886f5b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.154742 4833 generic.go:334] "Generic (PLEG): container finished" podID="af2f317c-d52d-483f-a616-0d4868b57951" containerID="f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6" exitCode=0 Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.154797 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6l2jk" event={"ID":"af2f317c-d52d-483f-a616-0d4868b57951","Type":"ContainerDied","Data":"f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.154821 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6l2jk" event={"ID":"af2f317c-d52d-483f-a616-0d4868b57951","Type":"ContainerDied","Data":"78f00b1dd6d63a455a1a92292fefae349ddc586c3c3d57552e7f2ea3eef2e0c3"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.154881 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6l2jk" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.161709 4833 generic.go:334] "Generic (PLEG): container finished" podID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerID="b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061" exitCode=0 Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.161734 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drnc5" event={"ID":"63fdb502-c6b1-44e8-86a8-c9571886f5b3","Type":"ContainerDied","Data":"b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.161751 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drnc5" event={"ID":"63fdb502-c6b1-44e8-86a8-c9571886f5b3","Type":"ContainerDied","Data":"37d10bed4452377d10143a5a144aa526b0d42ad833d246db9af75039f30da61a"} Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.161794 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drnc5" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.165063 4833 scope.go:117] "RemoveContainer" containerID="e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.178627 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6g499"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.179834 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6g499"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.197467 4833 scope.go:117] "RemoveContainer" containerID="c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.199462 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6l2jk"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.220039 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6l2jk"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.224396 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vwg98"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.228303 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vwg98"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.230145 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drnc5"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.232956 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hf4k4"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.235289 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63fdb502-c6b1-44e8-86a8-c9571886f5b3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.235508 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-drnc5"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.235717 4833 scope.go:117] "RemoveContainer" containerID="edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.236483 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa\": container with ID starting with edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa not found: ID does not exist" containerID="edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.236518 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa"} err="failed to get container status \"edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa\": rpc error: code = NotFound desc = could not find container \"edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa\": container with ID starting with edbef70a0eb30b101804cd97226dfed96a4341349ae5ff8c1e0c4c05b594dbfa not found: ID does not exist" Oct 13 06:32:55 crc 
kubenswrapper[4833]: I1013 06:32:55.236562 4833 scope.go:117] "RemoveContainer" containerID="e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.236969 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9\": container with ID starting with e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9 not found: ID does not exist" containerID="e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.237022 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9"} err="failed to get container status \"e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9\": rpc error: code = NotFound desc = could not find container \"e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9\": container with ID starting with e9be7f6c1fd16ecbb7e39e025cd029d57af262149850a6878aef5d26fad332d9 not found: ID does not exist" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.237063 4833 scope.go:117] "RemoveContainer" containerID="c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.237652 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310\": container with ID starting with c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310 not found: ID does not exist" containerID="c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.237686 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310"} err="failed to get container status \"c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310\": rpc error: code = NotFound desc = could not find container \"c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310\": container with ID starting with c1bc27016b668380ca6c66dd6588a1c7ed0756289bb2749a5ad3b86c23947310 not found: ID does not exist" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.237713 4833 scope.go:117] "RemoveContainer" containerID="1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.238435 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sw6h"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.240771 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4sw6h"] Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.261177 4833 scope.go:117] "RemoveContainer" containerID="1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.261886 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67\": container with ID starting with 1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67 not found: ID does not exist" 
containerID="1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.261995 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67"} err="failed to get container status \"1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67\": rpc error: code = NotFound desc = could not find container \"1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67\": container with ID starting with 1dfe118d5dc9228ba7296d59accacc286fa07080a76802af5b9ab7ed59f5cb67 not found: ID does not exist" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.262104 4833 scope.go:117] "RemoveContainer" containerID="7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.273884 4833 scope.go:117] "RemoveContainer" containerID="ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.292726 4833 scope.go:117] "RemoveContainer" containerID="2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.308410 4833 scope.go:117] "RemoveContainer" containerID="7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.308799 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d\": container with ID starting with 7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d not found: ID does not exist" containerID="7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.308824 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d"} err="failed to get container status \"7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d\": rpc error: code = NotFound desc = could not find container \"7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d\": container with ID starting with 7c3fd23f3c7f3c8312065cd9fc1d171f5a65969565012777c7204e67fd8cd93d not found: ID does not exist" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.308845 4833 scope.go:117] "RemoveContainer" containerID="ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.309181 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672\": container with ID starting with ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672 not found: ID does not exist" containerID="ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.309198 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672"} err="failed to get container status \"ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672\": rpc error: code = NotFound desc = could not find container 
\"ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672\": container with ID starting with ede10795a521854c704e0766f93f8c91fbb2bb8f5836e321b6301bd00fb1b672 not found: ID does not exist" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.309210 4833 scope.go:117] "RemoveContainer" containerID="2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.309404 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879\": container with ID starting with 2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879 not found: ID does not exist" containerID="2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.309421 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879"} err="failed to get container status \"2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879\": rpc error: code = NotFound desc = could not find container \"2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879\": container with ID starting with 2670db2561f1e020457c9de244dd4db35d8112c3f947348dd605a176ce58a879 not found: ID does not exist" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.309431 4833 scope.go:117] "RemoveContainer" containerID="f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.323472 4833 scope.go:117] "RemoveContainer" containerID="cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.336414 4833 scope.go:117] "RemoveContainer" containerID="3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.349619 4833 scope.go:117] "RemoveContainer" containerID="f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.350001 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6\": container with ID starting with f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6 not found: ID does not exist" containerID="f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.350035 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6"} err="failed to get container status \"f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6\": rpc error: code = NotFound desc = could not find container \"f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6\": container with ID starting with f4413631840162c4c7bc19543ebe3fe9268043c4bf4f1d9cbbea001245d94da6 not found: ID does not exist" Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.350061 4833 scope.go:117] "RemoveContainer" containerID="cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98" Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.350391 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98\": container with ID starting with cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98 not found: ID does not exist" containerID="cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.350441 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98"} err="failed to get container status \"cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98\": rpc error: code = NotFound desc = could not find container \"cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98\": container with ID starting with cec21ac2ca929613f4cb27a76a5146f06215573293461d28f2c2d32925f69c98 not found: ID does not exist"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.350467 4833 scope.go:117] "RemoveContainer" containerID="3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b"
Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.350810 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b\": container with ID starting with 3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b not found: ID does not exist" containerID="3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.350832 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b"} err="failed to get container status \"3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b\": rpc error: code = NotFound desc = could not find container \"3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b\": container with ID starting with 3528a2afa973cebdfa9ab17232fac9e8217a00a29ee9d931ef99c4aef5e3095b not found: ID does not exist"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.350848 4833 scope.go:117] "RemoveContainer" containerID="b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.366204 4833 scope.go:117] "RemoveContainer" containerID="befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.379141 4833 scope.go:117] "RemoveContainer" containerID="2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.394679 4833 scope.go:117] "RemoveContainer" containerID="b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061"
Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.395044 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061\": container with ID starting with b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061 not found: ID does not exist" containerID="b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.395077 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061"} err="failed to get container status \"b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061\": rpc error: code = NotFound desc = could not find container \"b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061\": container with ID starting with b7c4920f81b5afc6ca9ad51ee70d04d90c75ee50d4ae1cdc829b9b7fcf6e7061 not found: ID does not exist"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.395103 4833 scope.go:117] "RemoveContainer" containerID="befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5"
Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.396531 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5\": container with ID starting with befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5 not found: ID does not exist" containerID="befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.396600 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5"} err="failed to get container status \"befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5\": rpc error: code = NotFound desc = could not find container \"befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5\": container with ID starting with befeb517d3aa8424fdbbb012062face02d5b32edbccd55587d62456b2b6c05d5 not found: ID does not exist"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.396629 4833 scope.go:117] "RemoveContainer" containerID="2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2"
Oct 13 06:32:55 crc kubenswrapper[4833]: E1013 06:32:55.397014 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2\": container with ID starting with 2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2 not found: ID does not exist" containerID="2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2"
Oct 13 06:32:55 crc kubenswrapper[4833]: I1013 06:32:55.397037 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2"} err="failed to get container status \"2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2\": rpc error: code = NotFound desc = could not find container \"2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2\": container with ID starting with 2bbad559bc4d5f99ac62b3c52b46aaaa9820cb7379b78a7084c0d1cafc95f5c2 not found: ID does not exist"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.169454 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" event={"ID":"9abfcabe-0f85-4d47-aace-d218b9245549","Type":"ContainerStarted","Data":"785930c40aa373072da568cea701941a7241d3b63d268fa50afcaa7b13120bc6"}
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.169802 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" event={"ID":"9abfcabe-0f85-4d47-aace-d218b9245549","Type":"ContainerStarted","Data":"c830674a4808890d9c144c51f564cc810b392d9417a064eccc04675b9d9c5358"}
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.169982 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.171903 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.187714 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hf4k4" podStartSLOduration=2.187695186 podStartE2EDuration="2.187695186s" podCreationTimestamp="2025-10-13 06:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:32:56.183871025 +0000 UTC m=+266.284293931" watchObservedRunningTime="2025-10-13 06:32:56.187695186 +0000 UTC m=+266.288118102"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545162 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2bk65"]
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545363 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="extract-utilities"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545375 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="extract-utilities"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545386 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2f317c-d52d-483f-a616-0d4868b57951" containerName="extract-content"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545392 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="af2f317c-d52d-483f-a616-0d4868b57951" containerName="extract-content"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545400 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerName="extract-utilities"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545406 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerName="extract-utilities"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545416 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="extract-utilities"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545422 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="extract-utilities"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545430 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545435 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545444 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2f317c-d52d-483f-a616-0d4868b57951" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545449 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="af2f317c-d52d-483f-a616-0d4868b57951" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545459 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545465 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545472 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545478 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545485 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2f317c-d52d-483f-a616-0d4868b57951" containerName="extract-utilities"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545492 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="af2f317c-d52d-483f-a616-0d4868b57951" containerName="extract-utilities"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545499 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerName="extract-content"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545504 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerName="extract-content"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545511 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="extract-content"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545517 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="extract-content"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545524 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1d87b9-cb40-4860-8445-4729e0945358" containerName="marketplace-operator"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545529 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a1d87b9-cb40-4860-8445-4729e0945358" containerName="marketplace-operator"
Oct 13 06:32:56 crc kubenswrapper[4833]: E1013 06:32:56.545552 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="extract-content"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545558 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="extract-content"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545640 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545650 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="af2f317c-d52d-483f-a616-0d4868b57951" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545659 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" containerName="registry-server"
Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545667 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" containerName="registry-server"
containerName="registry-server" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.545676 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1d87b9-cb40-4860-8445-4729e0945358" containerName="marketplace-operator" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.546332 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.555439 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.560288 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bk65"] Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.634506 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a1d87b9-cb40-4860-8445-4729e0945358" path="/var/lib/kubelet/pods/1a1d87b9-cb40-4860-8445-4729e0945358/volumes" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.635143 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63fdb502-c6b1-44e8-86a8-c9571886f5b3" path="/var/lib/kubelet/pods/63fdb502-c6b1-44e8-86a8-c9571886f5b3/volumes" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.635964 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c65c1f2-5e55-4133-a013-d4d5e101e0d7" path="/var/lib/kubelet/pods/6c65c1f2-5e55-4133-a013-d4d5e101e0d7/volumes" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.637309 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cc9a234-36b5-410a-ab39-d8ee02cecf3c" path="/var/lib/kubelet/pods/9cc9a234-36b5-410a-ab39-d8ee02cecf3c/volumes" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.638111 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af2f317c-d52d-483f-a616-0d4868b57951" path="/var/lib/kubelet/pods/af2f317c-d52d-483f-a616-0d4868b57951/volumes" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.657046 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc8be39-7458-4c84-b227-856761d77e4e-utilities\") pod \"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.657114 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxdj6\" (UniqueName: \"kubernetes.io/projected/adc8be39-7458-4c84-b227-856761d77e4e-kube-api-access-qxdj6\") pod \"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.657143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc8be39-7458-4c84-b227-856761d77e4e-catalog-content\") pod \"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.744819 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9v92m"] Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.753516 4833 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.754931 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9v92m"] Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.757775 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.758158 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6beb74db-c3f1-480b-b294-9d1ba1867055-catalog-content\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.758224 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbtnn\" (UniqueName: \"kubernetes.io/projected/6beb74db-c3f1-480b-b294-9d1ba1867055-kube-api-access-cbtnn\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.758262 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc8be39-7458-4c84-b227-856761d77e4e-utilities\") pod \"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.758335 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6beb74db-c3f1-480b-b294-9d1ba1867055-utilities\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.758376 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxdj6\" (UniqueName: \"kubernetes.io/projected/adc8be39-7458-4c84-b227-856761d77e4e-kube-api-access-qxdj6\") pod \"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.758407 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc8be39-7458-4c84-b227-856761d77e4e-catalog-content\") pod \"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.758778 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc8be39-7458-4c84-b227-856761d77e4e-utilities\") pod \"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.758853 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc8be39-7458-4c84-b227-856761d77e4e-catalog-content\") pod 
\"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.778907 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxdj6\" (UniqueName: \"kubernetes.io/projected/adc8be39-7458-4c84-b227-856761d77e4e-kube-api-access-qxdj6\") pod \"certified-operators-2bk65\" (UID: \"adc8be39-7458-4c84-b227-856761d77e4e\") " pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.860081 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6beb74db-c3f1-480b-b294-9d1ba1867055-utilities\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.860332 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6beb74db-c3f1-480b-b294-9d1ba1867055-catalog-content\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.860667 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbtnn\" (UniqueName: \"kubernetes.io/projected/6beb74db-c3f1-480b-b294-9d1ba1867055-kube-api-access-cbtnn\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.860625 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6beb74db-c3f1-480b-b294-9d1ba1867055-catalog-content\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.860552 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6beb74db-c3f1-480b-b294-9d1ba1867055-utilities\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.868119 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:32:56 crc kubenswrapper[4833]: I1013 06:32:56.875899 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbtnn\" (UniqueName: \"kubernetes.io/projected/6beb74db-c3f1-480b-b294-9d1ba1867055-kube-api-access-cbtnn\") pod \"redhat-marketplace-9v92m\" (UID: \"6beb74db-c3f1-480b-b294-9d1ba1867055\") " pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:57 crc kubenswrapper[4833]: I1013 06:32:57.068455 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:32:57 crc kubenswrapper[4833]: I1013 06:32:57.101994 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bk65"] Oct 13 06:32:57 crc kubenswrapper[4833]: I1013 06:32:57.180026 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bk65" event={"ID":"adc8be39-7458-4c84-b227-856761d77e4e","Type":"ContainerStarted","Data":"3597bca1bc85a257a123dcd2a958270008d7fe3dc8dd39cafffd06b6f9e6523e"} Oct 13 06:32:57 crc kubenswrapper[4833]: I1013 06:32:57.270568 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9v92m"] Oct 13 06:32:57 crc kubenswrapper[4833]: W1013 06:32:57.281511 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6beb74db_c3f1_480b_b294_9d1ba1867055.slice/crio-3a13a25b328a510f82b4e59ec62f060f549afb22b8dfc7d3552308e0ea75c5bf WatchSource:0}: Error finding container 3a13a25b328a510f82b4e59ec62f060f549afb22b8dfc7d3552308e0ea75c5bf: Status 404 returned error can't find the container with id 3a13a25b328a510f82b4e59ec62f060f549afb22b8dfc7d3552308e0ea75c5bf Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.185550 4833 generic.go:334] "Generic (PLEG): container finished" podID="adc8be39-7458-4c84-b227-856761d77e4e" containerID="3b172144879b900410a53651017d9ec599ca595a0a6f96c70391f9424a521f62" exitCode=0 Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.185634 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bk65" event={"ID":"adc8be39-7458-4c84-b227-856761d77e4e","Type":"ContainerDied","Data":"3b172144879b900410a53651017d9ec599ca595a0a6f96c70391f9424a521f62"} Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.196174 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v92m" event={"ID":"6beb74db-c3f1-480b-b294-9d1ba1867055","Type":"ContainerDied","Data":"3e64df605f90e617015356085b00cdff7eb469b0e6d35a1179f7d4b92085e87e"} Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.196783 4833 generic.go:334] "Generic (PLEG): container finished" podID="6beb74db-c3f1-480b-b294-9d1ba1867055" containerID="3e64df605f90e617015356085b00cdff7eb469b0e6d35a1179f7d4b92085e87e" exitCode=0 Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.197338 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v92m" event={"ID":"6beb74db-c3f1-480b-b294-9d1ba1867055","Type":"ContainerStarted","Data":"3a13a25b328a510f82b4e59ec62f060f549afb22b8dfc7d3552308e0ea75c5bf"} Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.942659 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rqjsg"] Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.944073 4833 util.go:30] "No sandbox for pod can be found. 
Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.947986 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Oct 13 06:32:58 crc kubenswrapper[4833]: I1013 06:32:58.960742 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqjsg"]
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.091643 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-catalog-content\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.091681 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-utilities\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.091735 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tts8z\" (UniqueName: \"kubernetes.io/projected/9da3536c-cd43-4288-87df-1960453f5d50-kube-api-access-tts8z\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.157464 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n9hd7"]
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.158726 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.162368 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.164592 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9hd7"]
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.193027 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-catalog-content\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.193068 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-utilities\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.193582 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-catalog-content\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.193655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tts8z\" (UniqueName: \"kubernetes.io/projected/9da3536c-cd43-4288-87df-1960453f5d50-kube-api-access-tts8z\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.193587 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-utilities\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.203138 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bk65" event={"ID":"adc8be39-7458-4c84-b227-856761d77e4e","Type":"ContainerStarted","Data":"0221d9d16c5ff62fe6294af2c4882977266d8610025b05edd6c0d245062ec239"}
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.206640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v92m" event={"ID":"6beb74db-c3f1-480b-b294-9d1ba1867055","Type":"ContainerStarted","Data":"c7ac2b5589e51fc557cf4a0b65d1c2b8221553d68bad089f2b499166c8dc4147"}
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.221326 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tts8z\" (UniqueName: \"kubernetes.io/projected/9da3536c-cd43-4288-87df-1960453f5d50-kube-api-access-tts8z\") pod \"community-operators-rqjsg\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") " pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.260722 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.294925 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-utilities\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.294988 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscz5\" (UniqueName: \"kubernetes.io/projected/a092b978-dd26-456c-bf3c-310a83f188e7-kube-api-access-nscz5\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.295076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-catalog-content\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.396667 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nscz5\" (UniqueName: \"kubernetes.io/projected/a092b978-dd26-456c-bf3c-310a83f188e7-kube-api-access-nscz5\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.396725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-catalog-content\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.396779 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-utilities\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.397525 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-utilities\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.400819 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-catalog-content\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.416202 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nscz5\" (UniqueName: \"kubernetes.io/projected/a092b978-dd26-456c-bf3c-310a83f188e7-kube-api-access-nscz5\") pod \"redhat-operators-n9hd7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.483077 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9hd7"
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.656322 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqjsg"]
Oct 13 06:32:59 crc kubenswrapper[4833]: W1013 06:32:59.668515 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da3536c_cd43_4288_87df_1960453f5d50.slice/crio-f0ce091a5021446fdefafa3c7b5b5a772210f067e6477e116566ce1d4a02d1b5 WatchSource:0}: Error finding container f0ce091a5021446fdefafa3c7b5b5a772210f067e6477e116566ce1d4a02d1b5: Status 404 returned error can't find the container with id f0ce091a5021446fdefafa3c7b5b5a772210f067e6477e116566ce1d4a02d1b5
Oct 13 06:32:59 crc kubenswrapper[4833]: I1013 06:32:59.879756 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9hd7"]
Oct 13 06:32:59 crc kubenswrapper[4833]: W1013 06:32:59.886911 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda092b978_dd26_456c_bf3c_310a83f188e7.slice/crio-f3bc10fb4d27da74cfd4aece4c5cd403275b6dba26a976b7d5e9001dd5ae89a5 WatchSource:0}: Error finding container f3bc10fb4d27da74cfd4aece4c5cd403275b6dba26a976b7d5e9001dd5ae89a5: Status 404 returned error can't find the container with id f3bc10fb4d27da74cfd4aece4c5cd403275b6dba26a976b7d5e9001dd5ae89a5
Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.212988 4833 generic.go:334] "Generic (PLEG): container finished" podID="a092b978-dd26-456c-bf3c-310a83f188e7" containerID="301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837" exitCode=0
Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.213049 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hd7" event={"ID":"a092b978-dd26-456c-bf3c-310a83f188e7","Type":"ContainerDied","Data":"301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837"}
Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.213091 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hd7" event={"ID":"a092b978-dd26-456c-bf3c-310a83f188e7","Type":"ContainerStarted","Data":"f3bc10fb4d27da74cfd4aece4c5cd403275b6dba26a976b7d5e9001dd5ae89a5"}
Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.215685 4833 generic.go:334] "Generic (PLEG): container finished" podID="adc8be39-7458-4c84-b227-856761d77e4e" containerID="0221d9d16c5ff62fe6294af2c4882977266d8610025b05edd6c0d245062ec239" exitCode=0
Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.215737 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bk65" event={"ID":"adc8be39-7458-4c84-b227-856761d77e4e","Type":"ContainerDied","Data":"0221d9d16c5ff62fe6294af2c4882977266d8610025b05edd6c0d245062ec239"}
Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.219742 4833 generic.go:334] "Generic (PLEG): container finished" podID="6beb74db-c3f1-480b-b294-9d1ba1867055" containerID="c7ac2b5589e51fc557cf4a0b65d1c2b8221553d68bad089f2b499166c8dc4147" exitCode=0
Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.220850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v92m" event={"ID":"6beb74db-c3f1-480b-b294-9d1ba1867055","Type":"ContainerDied","Data":"c7ac2b5589e51fc557cf4a0b65d1c2b8221553d68bad089f2b499166c8dc4147"}
pod="openshift-marketplace/redhat-marketplace-9v92m" event={"ID":"6beb74db-c3f1-480b-b294-9d1ba1867055","Type":"ContainerDied","Data":"c7ac2b5589e51fc557cf4a0b65d1c2b8221553d68bad089f2b499166c8dc4147"} Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.222744 4833 generic.go:334] "Generic (PLEG): container finished" podID="9da3536c-cd43-4288-87df-1960453f5d50" containerID="1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb" exitCode=0 Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.222785 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqjsg" event={"ID":"9da3536c-cd43-4288-87df-1960453f5d50","Type":"ContainerDied","Data":"1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb"} Oct 13 06:33:00 crc kubenswrapper[4833]: I1013 06:33:00.222809 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqjsg" event={"ID":"9da3536c-cd43-4288-87df-1960453f5d50","Type":"ContainerStarted","Data":"f0ce091a5021446fdefafa3c7b5b5a772210f067e6477e116566ce1d4a02d1b5"} Oct 13 06:33:01 crc kubenswrapper[4833]: I1013 06:33:01.235953 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v92m" event={"ID":"6beb74db-c3f1-480b-b294-9d1ba1867055","Type":"ContainerStarted","Data":"59f102a1bd1b3ae73919240f99478c68f493e31a1a60c54463f6aca2823dd4e5"} Oct 13 06:33:01 crc kubenswrapper[4833]: I1013 06:33:01.238855 4833 generic.go:334] "Generic (PLEG): container finished" podID="9da3536c-cd43-4288-87df-1960453f5d50" containerID="450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c" exitCode=0 Oct 13 06:33:01 crc kubenswrapper[4833]: I1013 06:33:01.238923 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqjsg" event={"ID":"9da3536c-cd43-4288-87df-1960453f5d50","Type":"ContainerDied","Data":"450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c"} Oct 13 06:33:01 crc kubenswrapper[4833]: I1013 06:33:01.240711 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hd7" event={"ID":"a092b978-dd26-456c-bf3c-310a83f188e7","Type":"ContainerStarted","Data":"a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a"} Oct 13 06:33:01 crc kubenswrapper[4833]: I1013 06:33:01.243972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bk65" event={"ID":"adc8be39-7458-4c84-b227-856761d77e4e","Type":"ContainerStarted","Data":"21c7a1591df3d1a77136836dbcc635054a4bd4317cac5dc02c7b5f0f4f48afb1"} Oct 13 06:33:01 crc kubenswrapper[4833]: I1013 06:33:01.249496 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9v92m" podStartSLOduration=2.843674815 podStartE2EDuration="5.2494863s" podCreationTimestamp="2025-10-13 06:32:56 +0000 UTC" firstStartedPulling="2025-10-13 06:32:58.198916113 +0000 UTC m=+268.299339029" lastFinishedPulling="2025-10-13 06:33:00.604727558 +0000 UTC m=+270.705150514" observedRunningTime="2025-10-13 06:33:01.248362187 +0000 UTC m=+271.348785123" watchObservedRunningTime="2025-10-13 06:33:01.2494863 +0000 UTC m=+271.349909216" Oct 13 06:33:01 crc kubenswrapper[4833]: I1013 06:33:01.290653 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2bk65" podStartSLOduration=2.783205855 podStartE2EDuration="5.290633098s" 
podCreationTimestamp="2025-10-13 06:32:56 +0000 UTC" firstStartedPulling="2025-10-13 06:32:58.194601468 +0000 UTC m=+268.295024384" lastFinishedPulling="2025-10-13 06:33:00.702028711 +0000 UTC m=+270.802451627" observedRunningTime="2025-10-13 06:33:01.288228668 +0000 UTC m=+271.388651584" watchObservedRunningTime="2025-10-13 06:33:01.290633098 +0000 UTC m=+271.391056014" Oct 13 06:33:02 crc kubenswrapper[4833]: I1013 06:33:02.251331 4833 generic.go:334] "Generic (PLEG): container finished" podID="a092b978-dd26-456c-bf3c-310a83f188e7" containerID="a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a" exitCode=0 Oct 13 06:33:02 crc kubenswrapper[4833]: I1013 06:33:02.251426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hd7" event={"ID":"a092b978-dd26-456c-bf3c-310a83f188e7","Type":"ContainerDied","Data":"a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a"} Oct 13 06:33:03 crc kubenswrapper[4833]: I1013 06:33:03.260050 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqjsg" event={"ID":"9da3536c-cd43-4288-87df-1960453f5d50","Type":"ContainerStarted","Data":"ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b"} Oct 13 06:33:03 crc kubenswrapper[4833]: I1013 06:33:03.264907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hd7" event={"ID":"a092b978-dd26-456c-bf3c-310a83f188e7","Type":"ContainerStarted","Data":"063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0"} Oct 13 06:33:03 crc kubenswrapper[4833]: I1013 06:33:03.302992 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rqjsg" podStartSLOduration=3.685539176 podStartE2EDuration="5.302975477s" podCreationTimestamp="2025-10-13 06:32:58 +0000 UTC" firstStartedPulling="2025-10-13 06:33:00.22388679 +0000 UTC m=+270.324309706" lastFinishedPulling="2025-10-13 06:33:01.841323091 +0000 UTC m=+271.941746007" observedRunningTime="2025-10-13 06:33:03.282312716 +0000 UTC m=+273.382735652" watchObservedRunningTime="2025-10-13 06:33:03.302975477 +0000 UTC m=+273.403398393" Oct 13 06:33:03 crc kubenswrapper[4833]: I1013 06:33:03.303982 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n9hd7" podStartSLOduration=1.771293837 podStartE2EDuration="4.303975206s" podCreationTimestamp="2025-10-13 06:32:59 +0000 UTC" firstStartedPulling="2025-10-13 06:33:00.215142245 +0000 UTC m=+270.315565161" lastFinishedPulling="2025-10-13 06:33:02.747823614 +0000 UTC m=+272.848246530" observedRunningTime="2025-10-13 06:33:03.301180554 +0000 UTC m=+273.401603470" watchObservedRunningTime="2025-10-13 06:33:03.303975206 +0000 UTC m=+273.404398122" Oct 13 06:33:06 crc kubenswrapper[4833]: I1013 06:33:06.868666 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:33:06 crc kubenswrapper[4833]: I1013 06:33:06.869382 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:33:06 crc kubenswrapper[4833]: I1013 06:33:06.912525 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:33:07 crc kubenswrapper[4833]: I1013 06:33:07.068629 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:33:07 crc kubenswrapper[4833]: I1013 06:33:07.068712 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:33:07 crc kubenswrapper[4833]: I1013 06:33:07.130629 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:33:07 crc kubenswrapper[4833]: I1013 06:33:07.347908 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9v92m" Oct 13 06:33:07 crc kubenswrapper[4833]: I1013 06:33:07.362069 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2bk65" Oct 13 06:33:09 crc kubenswrapper[4833]: I1013 06:33:09.260934 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rqjsg" Oct 13 06:33:09 crc kubenswrapper[4833]: I1013 06:33:09.261652 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rqjsg" Oct 13 06:33:09 crc kubenswrapper[4833]: I1013 06:33:09.305711 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rqjsg" Oct 13 06:33:09 crc kubenswrapper[4833]: I1013 06:33:09.483462 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n9hd7" Oct 13 06:33:09 crc kubenswrapper[4833]: I1013 06:33:09.483512 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n9hd7" Oct 13 06:33:09 crc kubenswrapper[4833]: I1013 06:33:09.521415 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n9hd7" Oct 13 06:33:10 crc kubenswrapper[4833]: I1013 06:33:10.360983 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rqjsg" Oct 13 06:33:10 crc kubenswrapper[4833]: I1013 06:33:10.374341 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n9hd7" Oct 13 06:34:30 crc kubenswrapper[4833]: I1013 06:34:30.543191 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:34:30 crc kubenswrapper[4833]: I1013 06:34:30.543884 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:35:00 crc kubenswrapper[4833]: I1013 06:35:00.542474 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:35:00 crc kubenswrapper[4833]: I1013 06:35:00.543156 4833 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:35:30 crc kubenswrapper[4833]: I1013 06:35:30.543054 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:35:30 crc kubenswrapper[4833]: I1013 06:35:30.543718 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:35:30 crc kubenswrapper[4833]: I1013 06:35:30.543769 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:35:30 crc kubenswrapper[4833]: I1013 06:35:30.544346 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfe0e3b90fb39f70a625aa18e10ffc3bc8f0d046e4ae8138cd65de6a8868a93f"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 06:35:30 crc kubenswrapper[4833]: I1013 06:35:30.544419 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://dfe0e3b90fb39f70a625aa18e10ffc3bc8f0d046e4ae8138cd65de6a8868a93f" gracePeriod=600 Oct 13 06:35:31 crc kubenswrapper[4833]: I1013 06:35:31.211760 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="dfe0e3b90fb39f70a625aa18e10ffc3bc8f0d046e4ae8138cd65de6a8868a93f" exitCode=0 Oct 13 06:35:31 crc kubenswrapper[4833]: I1013 06:35:31.211855 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"dfe0e3b90fb39f70a625aa18e10ffc3bc8f0d046e4ae8138cd65de6a8868a93f"} Oct 13 06:35:31 crc kubenswrapper[4833]: I1013 06:35:31.212127 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"3d72eefcdc034d5e45d5c524e5c6641a6db280762987026a18bc3b264c963a34"} Oct 13 06:35:31 crc kubenswrapper[4833]: I1013 06:35:31.212189 4833 scope.go:117] "RemoveContainer" containerID="b6317731a3b73f6dfd4e5ff841d989753e9e862508bd53da00ec168340e49388" Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.279309 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7qld2"] Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.280496 4833 util.go:30] "No sandbox for pod can be found. 
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.300107 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7qld2"]
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.410008 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ced4a14f-f0e8-44bb-90f1-e573b3685716-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.410281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-registry-tls\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.410398 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ced4a14f-f0e8-44bb-90f1-e573b3685716-registry-certificates\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.410611 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ced4a14f-f0e8-44bb-90f1-e573b3685716-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.410674 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-bound-sa-token\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.410736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ced4a14f-f0e8-44bb-90f1-e573b3685716-trusted-ca\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.410758 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgh4\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-kube-api-access-jjgh4\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.410807 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.443570 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.511723 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ced4a14f-f0e8-44bb-90f1-e573b3685716-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.511793 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-registry-tls\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.511922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ced4a14f-f0e8-44bb-90f1-e573b3685716-registry-certificates\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.511974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ced4a14f-f0e8-44bb-90f1-e573b3685716-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.512014 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-bound-sa-token\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.512067 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ced4a14f-f0e8-44bb-90f1-e573b3685716-trusted-ca\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.512101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjgh4\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-kube-api-access-jjgh4\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.512233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ced4a14f-f0e8-44bb-90f1-e573b3685716-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.513304 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ced4a14f-f0e8-44bb-90f1-e573b3685716-trusted-ca\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.513523 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ced4a14f-f0e8-44bb-90f1-e573b3685716-registry-certificates\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.521475 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-registry-tls\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.521510 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ced4a14f-f0e8-44bb-90f1-e573b3685716-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.534962 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjgh4\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-kube-api-access-jjgh4\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.549794 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ced4a14f-f0e8-44bb-90f1-e573b3685716-bound-sa-token\") pod \"image-registry-66df7c8f76-7qld2\" (UID: \"ced4a14f-f0e8-44bb-90f1-e573b3685716\") " pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.597845 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:40 crc kubenswrapper[4833]: I1013 06:35:40.800961 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7qld2"]
Oct 13 06:35:41 crc kubenswrapper[4833]: I1013 06:35:41.281055 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7qld2" event={"ID":"ced4a14f-f0e8-44bb-90f1-e573b3685716","Type":"ContainerStarted","Data":"f64d3e965241631f9b035c194f26d9ff39a3ef3bfa7bb2f34ae5be057e528fce"}
Oct 13 06:35:41 crc kubenswrapper[4833]: I1013 06:35:41.282501 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7qld2" event={"ID":"ced4a14f-f0e8-44bb-90f1-e573b3685716","Type":"ContainerStarted","Data":"fbbc3dea028ae90feb192c3471825718bf1a5bdd8e600b843c152f1c33919893"}
Oct 13 06:35:41 crc kubenswrapper[4833]: I1013 06:35:41.282621 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:35:41 crc kubenswrapper[4833]: I1013 06:35:41.300433 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7qld2" podStartSLOduration=1.300416478 podStartE2EDuration="1.300416478s" podCreationTimestamp="2025-10-13 06:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:35:41.298892763 +0000 UTC m=+431.399315679" watchObservedRunningTime="2025-10-13 06:35:41.300416478 +0000 UTC m=+431.400839394"
Oct 13 06:36:00 crc kubenswrapper[4833]: I1013 06:36:00.607108 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7qld2"
Oct 13 06:36:00 crc kubenswrapper[4833]: I1013 06:36:00.672017 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7kzpp"]
Oct 13 06:36:25 crc kubenswrapper[4833]: I1013 06:36:25.708604 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" podUID="523f8b54-3667-4d44-99b3-99a4caca1cee" containerName="registry" containerID="cri-o://c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0" gracePeriod=30
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.100792 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp"
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.205037 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-bound-sa-token\") pod \"523f8b54-3667-4d44-99b3-99a4caca1cee\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") "
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.205074 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-trusted-ca\") pod \"523f8b54-3667-4d44-99b3-99a4caca1cee\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") "
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.205106 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-tls\") pod \"523f8b54-3667-4d44-99b3-99a4caca1cee\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") "
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.205161 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/523f8b54-3667-4d44-99b3-99a4caca1cee-installation-pull-secrets\") pod \"523f8b54-3667-4d44-99b3-99a4caca1cee\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") "
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.205307 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"523f8b54-3667-4d44-99b3-99a4caca1cee\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") "
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.205350 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/523f8b54-3667-4d44-99b3-99a4caca1cee-ca-trust-extracted\") pod \"523f8b54-3667-4d44-99b3-99a4caca1cee\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") "
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.205374 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-certificates\") pod \"523f8b54-3667-4d44-99b3-99a4caca1cee\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") "
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.205403 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg89p\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-kube-api-access-rg89p\") pod \"523f8b54-3667-4d44-99b3-99a4caca1cee\" (UID: \"523f8b54-3667-4d44-99b3-99a4caca1cee\") "
Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.206750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "523f8b54-3667-4d44-99b3-99a4caca1cee" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee"). InnerVolumeSpecName "trusted-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.206915 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "523f8b54-3667-4d44-99b3-99a4caca1cee" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.211733 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-kube-api-access-rg89p" (OuterVolumeSpecName: "kube-api-access-rg89p") pod "523f8b54-3667-4d44-99b3-99a4caca1cee" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee"). InnerVolumeSpecName "kube-api-access-rg89p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.212705 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "523f8b54-3667-4d44-99b3-99a4caca1cee" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.212826 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "523f8b54-3667-4d44-99b3-99a4caca1cee" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.221439 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523f8b54-3667-4d44-99b3-99a4caca1cee-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "523f8b54-3667-4d44-99b3-99a4caca1cee" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.225819 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "523f8b54-3667-4d44-99b3-99a4caca1cee" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.236013 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/523f8b54-3667-4d44-99b3-99a4caca1cee-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "523f8b54-3667-4d44-99b3-99a4caca1cee" (UID: "523f8b54-3667-4d44-99b3-99a4caca1cee"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.306598 4833 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.306650 4833 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/523f8b54-3667-4d44-99b3-99a4caca1cee-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.306668 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg89p\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-kube-api-access-rg89p\") on node \"crc\" DevicePath \"\"" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.306686 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.306708 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/523f8b54-3667-4d44-99b3-99a4caca1cee-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.306725 4833 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/523f8b54-3667-4d44-99b3-99a4caca1cee-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.306743 4833 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/523f8b54-3667-4d44-99b3-99a4caca1cee-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.576198 4833 generic.go:334] "Generic (PLEG): container finished" podID="523f8b54-3667-4d44-99b3-99a4caca1cee" containerID="c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0" exitCode=0 Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.576270 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" event={"ID":"523f8b54-3667-4d44-99b3-99a4caca1cee","Type":"ContainerDied","Data":"c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0"} Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.576328 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.576532 4833 scope.go:117] "RemoveContainer" containerID="c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.576313 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7kzpp" event={"ID":"523f8b54-3667-4d44-99b3-99a4caca1cee","Type":"ContainerDied","Data":"1fd0df744b57e356d54b7fb67b8cc257c98d6b8ebbee32395b942b279b1913b0"} Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.600900 4833 scope.go:117] "RemoveContainer" containerID="c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0" Oct 13 06:36:26 crc kubenswrapper[4833]: E1013 06:36:26.601637 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0\": container with ID starting with c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0 not found: ID does not exist" containerID="c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.601696 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0"} err="failed to get container status \"c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0\": rpc error: code = NotFound desc = could not find container \"c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0\": container with ID starting with c6d61f467d00b79fe4684dc8fd49f5fc9e3626a43693b0ab8a15f5fa415aeef0 not found: ID does not exist" Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.623395 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7kzpp"] Oct 13 06:36:26 crc kubenswrapper[4833]: I1013 06:36:26.635323 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7kzpp"] Oct 13 06:36:28 crc kubenswrapper[4833]: I1013 06:36:28.643249 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523f8b54-3667-4d44-99b3-99a4caca1cee" path="/var/lib/kubelet/pods/523f8b54-3667-4d44-99b3-99a4caca1cee/volumes" Oct 13 06:37:30 crc kubenswrapper[4833]: I1013 06:37:30.542405 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:37:30 crc kubenswrapper[4833]: I1013 06:37:30.542992 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:38:00 crc kubenswrapper[4833]: I1013 06:38:00.542998 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 
13 06:38:00 crc kubenswrapper[4833]: I1013 06:38:00.543513 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:38:30 crc kubenswrapper[4833]: I1013 06:38:30.542636 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:38:30 crc kubenswrapper[4833]: I1013 06:38:30.543329 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:38:30 crc kubenswrapper[4833]: I1013 06:38:30.543393 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:38:30 crc kubenswrapper[4833]: I1013 06:38:30.544253 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d72eefcdc034d5e45d5c524e5c6641a6db280762987026a18bc3b264c963a34"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 06:38:30 crc kubenswrapper[4833]: I1013 06:38:30.544370 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://3d72eefcdc034d5e45d5c524e5c6641a6db280762987026a18bc3b264c963a34" gracePeriod=600 Oct 13 06:38:31 crc kubenswrapper[4833]: I1013 06:38:31.393726 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="3d72eefcdc034d5e45d5c524e5c6641a6db280762987026a18bc3b264c963a34" exitCode=0 Oct 13 06:38:31 crc kubenswrapper[4833]: I1013 06:38:31.393797 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"3d72eefcdc034d5e45d5c524e5c6641a6db280762987026a18bc3b264c963a34"} Oct 13 06:38:31 crc kubenswrapper[4833]: I1013 06:38:31.394108 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"0c7662799f74f815e9b59b491ebe056d2f40e7a81a10b5cc35be025975e84206"} Oct 13 06:38:31 crc kubenswrapper[4833]: I1013 06:38:31.394204 4833 scope.go:117] "RemoveContainer" containerID="dfe0e3b90fb39f70a625aa18e10ffc3bc8f0d046e4ae8138cd65de6a8868a93f" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.442963 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wnpc6"] Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.444059 4833 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="northd" containerID="cri-o://8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a" gracePeriod=30 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.444059 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e" gracePeriod=30 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.444139 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovn-controller" containerID="cri-o://5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4" gracePeriod=30 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.444121 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovn-acl-logging" containerID="cri-o://c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368" gracePeriod=30 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.444186 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kube-rbac-proxy-node" containerID="cri-o://75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a" gracePeriod=30 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.444301 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="sbdb" containerID="cri-o://015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065" gracePeriod=30 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.448194 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="nbdb" containerID="cri-o://609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40" gracePeriod=30 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.490633 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" containerID="cri-o://c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49" gracePeriod=30 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.724827 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/2.log" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.725216 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/1.log" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.725256 4833 generic.go:334] "Generic (PLEG): container finished" podID="9d1bd0f7-c161-456d-af32-2da416006789" containerID="8b45e4b875a145b2ae05a9c9a05af30df92a79bf06adb47cb6550ae4ac56cb08" exitCode=2 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 
06:39:24.725309 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zbg2r" event={"ID":"9d1bd0f7-c161-456d-af32-2da416006789","Type":"ContainerDied","Data":"8b45e4b875a145b2ae05a9c9a05af30df92a79bf06adb47cb6550ae4ac56cb08"} Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.725366 4833 scope.go:117] "RemoveContainer" containerID="f7560e6781e45623f8f09699ee026305664eb7a06da06088ac4e870b174c94c6" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.725935 4833 scope.go:117] "RemoveContainer" containerID="8b45e4b875a145b2ae05a9c9a05af30df92a79bf06adb47cb6550ae4ac56cb08" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.726204 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zbg2r_openshift-multus(9d1bd0f7-c161-456d-af32-2da416006789)\"" pod="openshift-multus/multus-zbg2r" podUID="9d1bd0f7-c161-456d-af32-2da416006789" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.729407 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovnkube-controller/3.log" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.732352 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovn-acl-logging/0.log" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.733084 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovn-controller/0.log" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.733478 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49" exitCode=0 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.733604 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065" exitCode=0 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.733685 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40" exitCode=0 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.733749 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e" exitCode=0 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.733815 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a" exitCode=0 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.733876 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368" exitCode=143 Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.733945 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4" exitCode=143 Oct 13 06:39:24 crc kubenswrapper[4833]: 
I1013 06:39:24.734023 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49"} Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.734109 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065"} Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.734188 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40"} Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.734262 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e"} Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.734335 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a"} Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.734408 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368"} Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.734484 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4"} Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.771716 4833 scope.go:117] "RemoveContainer" containerID="bc6b7ae614a47894eb39d173d42b688003f00a958ae034ee47875ae4a41b139c" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.777251 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovn-acl-logging/0.log" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.777661 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovn-controller/0.log" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.778060 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.827887 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fdqvk"] Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828146 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovn-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828159 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovn-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828169 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovn-acl-logging" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828194 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovn-acl-logging" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828204 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="northd" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828211 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="northd" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828222 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kube-rbac-proxy-ovn-metrics" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828228 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kube-rbac-proxy-ovn-metrics" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828262 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828270 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828279 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828285 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828295 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="nbdb" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828301 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="nbdb" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828309 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kubecfg-setup" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828315 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kubecfg-setup" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828324 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="523f8b54-3667-4d44-99b3-99a4caca1cee" containerName="registry" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828331 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="523f8b54-3667-4d44-99b3-99a4caca1cee" containerName="registry" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828344 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828352 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828387 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="sbdb" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828393 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="sbdb" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828402 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kube-rbac-proxy-node" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828410 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kube-rbac-proxy-node" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828517 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovn-acl-logging" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828529 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kube-rbac-proxy-ovn-metrics" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828578 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="523f8b54-3667-4d44-99b3-99a4caca1cee" containerName="registry" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828588 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828596 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="nbdb" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828604 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828612 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828620 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="kube-rbac-proxy-node" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828627 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovn-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828636 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="northd" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828642 4833 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="sbdb" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828650 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828661 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828746 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828753 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: E1013 06:39:24.828765 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.828771 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerName="ovnkube-controller" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.830421 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.864674 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5vxf\" (UniqueName: \"kubernetes.io/projected/cb9a788e-b626-43a8-955a-bf4a5a3cb145-kube-api-access-k5vxf\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.864769 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.864815 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-netd\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.864893 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
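
Three events interleave in the stretch above. First, machine-config-daemon-wd7ss failed its liveness probe at 06:37:30, 06:38:00 and 06:38:30 (connection refused on 127.0.0.1:8798), so after the third failure the kubelet killed container 3d72eefc... with gracePeriod=600 and started a replacement, which the ContainerDied (exitCode=0) / ContainerStarted pair at 06:38:31 confirms. Second, ovnkube-node-wnpc6 was deleted through the API and every one of its containers was sent a graceful stop with gracePeriod=30; the PLEG "container finished" records then report exitCode=0 for the containers that shut down cleanly and exitCode=143 (128 + SIGTERM) for ovn-acl-logging and ovn-controller, which were still running when the signal landed. Third, kube-multus exited with code 2 and was put into a 20 s CrashLoopBackOff. The earlier "ContainerStatus from runtime service failed ... NotFound" error is a benign race (the kubelet queried CRI-O for a container that had already been removed), and the RemoveStaleState / "Deleted CPUSet assignment" lines are the cpu and memory managers dropping per-container accounting for the replaced pods. A small tally sketch over such a journal; the regex is written against the exact line shape above and is an assumption:

```python
import re
import sys
from collections import Counter

# Sketch: tally exit codes from the PLEG "container finished" lines above.
#   0   = clean exit during graceful stop
#   143 = 128 + SIGTERM(15): terminated by the graceful-stop signal
#   137 = 128 + SIGKILL(9): grace period exceeded (not seen in this log)
#   anything else (e.g. the 2 from kube-multus) is an error exit
FINISHED = re.compile(r'container finished.*?containerID="([0-9a-f]+)" exitCode=(\d+)')

codes = Counter()
for line in sys.stdin:
    if m := FINISHED.search(line):
        codes[int(m.group(2))] += 1

labels = {0: "clean exit", 143: "SIGTERM", 137: "SIGKILL"}
for code, n in sorted(codes.items()):
    print(f"exitCode={code}: {n} container(s) ({labels.get(code, 'error')})")
```
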
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.864910 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-env-overrides\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865018 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-systemd\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865047 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-slash\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865049 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865094 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovn-node-metrics-cert\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865119 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-node-log\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865156 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-ovn\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865176 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-netns\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865212 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-systemd-units\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865233 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-bin\") pod 
\"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865262 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-ovn-kubernetes\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865288 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-log-socket\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865311 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-config\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865331 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-etc-openvswitch\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865353 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-var-lib-openvswitch\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865395 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-kubelet\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865427 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-script-lib\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865455 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-openvswitch\") pod \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\" (UID: \"cb9a788e-b626-43a8-955a-bf4a5a3cb145\") " Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865599 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-slash" (OuterVolumeSpecName: "host-slash") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865610 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865633 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865638 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-node-log" (OuterVolumeSpecName: "node-log") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865671 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865774 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865658 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865710 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865835 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865755 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-log-socket" (OuterVolumeSpecName: "log-socket") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865910 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866120 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866226 4833 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866253 4833 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866272 4833 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866289 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865561 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866303 4833 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-slash\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.865698 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). 
InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866343 4833 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-node-log\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866358 4833 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866370 4833 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866361 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866380 4833 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866433 4833 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-log-socket\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866455 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866475 4833 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866495 4833 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.866518 4833 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.870941 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.871306 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9a788e-b626-43a8-955a-bf4a5a3cb145-kube-api-access-k5vxf" (OuterVolumeSpecName: "kube-api-access-k5vxf") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "kube-api-access-k5vxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.881743 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cb9a788e-b626-43a8-955a-bf4a5a3cb145" (UID: "cb9a788e-b626-43a8-955a-bf4a5a3cb145"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968132 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-node-log\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968215 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-kubelet\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968265 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-systemd\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968316 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2v6w\" (UniqueName: \"kubernetes.io/projected/36312e75-9f84-412d-8b8d-a1f47c2cd14c-kube-api-access-s2v6w\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968380 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-etc-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968433 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-run-ovn-kubernetes\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968494 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-env-overrides\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968596 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovn-node-metrics-cert\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968653 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-cni-bin\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-var-lib-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968757 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-ovn\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968807 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968854 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovnkube-config\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968916 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-systemd-units\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.968970 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-run-netns\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc 
kubenswrapper[4833]: I1013 06:39:24.969014 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-cni-netd\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969059 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-log-socket\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovnkube-script-lib\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969186 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-slash\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969327 4833 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969358 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969388 4833 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969413 4833 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb9a788e-b626-43a8-955a-bf4a5a3cb145-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969438 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb9a788e-b626-43a8-955a-bf4a5a3cb145-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:24 crc kubenswrapper[4833]: I1013 06:39:24.969463 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5vxf\" (UniqueName: 
\"kubernetes.io/projected/cb9a788e-b626-43a8-955a-bf4a5a3cb145-kube-api-access-k5vxf\") on node \"crc\" DevicePath \"\"" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.070997 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-etc-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071050 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-run-ovn-kubernetes\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071105 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-env-overrides\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071113 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-etc-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071132 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-run-ovn-kubernetes\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071149 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovn-node-metrics-cert\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071322 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-cni-bin\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-var-lib-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071406 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-ovn\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071441 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovnkube-config\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071524 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-systemd-units\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-run-netns\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-cni-netd\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071650 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-log-socket\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovnkube-script-lib\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071732 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071764 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-slash\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 
06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071834 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-kubelet\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-node-log\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-systemd\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.071930 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2v6w\" (UniqueName: \"kubernetes.io/projected/36312e75-9f84-412d-8b8d-a1f47c2cd14c-kube-api-access-s2v6w\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072016 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-cni-netd\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072094 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-cni-bin\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072141 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-var-lib-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-ovn\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072375 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-log-socket\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072527 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-kubelet\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072603 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-openvswitch\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072640 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-slash\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072675 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-systemd-units\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072745 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-host-run-netns\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072798 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-node-log\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.072846 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36312e75-9f84-412d-8b8d-a1f47c2cd14c-run-systemd\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.073512 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-env-overrides\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.073716 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovnkube-config\") pod \"ovnkube-node-fdqvk\" (UID: 
\"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.073836 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovnkube-script-lib\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.075891 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36312e75-9f84-412d-8b8d-a1f47c2cd14c-ovn-node-metrics-cert\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.094701 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2v6w\" (UniqueName: \"kubernetes.io/projected/36312e75-9f84-412d-8b8d-a1f47c2cd14c-kube-api-access-s2v6w\") pod \"ovnkube-node-fdqvk\" (UID: \"36312e75-9f84-412d-8b8d-a1f47c2cd14c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.149208 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.743741 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/2.log" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.751295 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovn-acl-logging/0.log" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.752978 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wnpc6_cb9a788e-b626-43a8-955a-bf4a5a3cb145/ovn-controller/0.log" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.753922 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" containerID="8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a" exitCode=0 Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.754068 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a"} Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.754124 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" event={"ID":"cb9a788e-b626-43a8-955a-bf4a5a3cb145","Type":"ContainerDied","Data":"1e755011c31d18efdad2d1310ee11165d3f7fb1637877b36f7cb07c9b77a1e7c"} Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.754141 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wnpc6" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.754163 4833 scope.go:117] "RemoveContainer" containerID="c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.763353 4833 generic.go:334] "Generic (PLEG): container finished" podID="36312e75-9f84-412d-8b8d-a1f47c2cd14c" containerID="56bea8bcc6dbd09277ff55edfca18f92103f6e76471ac3d9344f0d5162cfdf17" exitCode=0 Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.763937 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerDied","Data":"56bea8bcc6dbd09277ff55edfca18f92103f6e76471ac3d9344f0d5162cfdf17"} Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.764770 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"75be67d2f213fa1a61d397c82fb2b2e9b1680d266f2c26871d3708223c721a70"} Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.791056 4833 scope.go:117] "RemoveContainer" containerID="015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.840839 4833 scope.go:117] "RemoveContainer" containerID="609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.855633 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wnpc6"] Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.860605 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wnpc6"] Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.895327 4833 scope.go:117] "RemoveContainer" containerID="8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.914109 4833 scope.go:117] "RemoveContainer" containerID="2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.931651 4833 scope.go:117] "RemoveContainer" containerID="75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.948097 4833 scope.go:117] "RemoveContainer" containerID="c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.963958 4833 scope.go:117] "RemoveContainer" containerID="5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4" Oct 13 06:39:25 crc kubenswrapper[4833]: I1013 06:39:25.981733 4833 scope.go:117] "RemoveContainer" containerID="73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.003004 4833 scope.go:117] "RemoveContainer" containerID="c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49" Oct 13 06:39:26 crc kubenswrapper[4833]: E1013 06:39:26.003639 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49\": container with ID starting with c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49 not found: ID does not exist" 
containerID="c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.003718 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49"} err="failed to get container status \"c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49\": rpc error: code = NotFound desc = could not find container \"c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49\": container with ID starting with c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49 not found: ID does not exist" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.003795 4833 scope.go:117] "RemoveContainer" containerID="015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065" Oct 13 06:39:26 crc kubenswrapper[4833]: E1013 06:39:26.004379 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\": container with ID starting with 015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065 not found: ID does not exist" containerID="015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.004432 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065"} err="failed to get container status \"015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\": rpc error: code = NotFound desc = could not find container \"015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065\": container with ID starting with 015d133a3f994e2989b0a8c9095b405bd4672af3ed431534a84d74cbbfefb065 not found: ID does not exist" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.004465 4833 scope.go:117] "RemoveContainer" containerID="609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40" Oct 13 06:39:26 crc kubenswrapper[4833]: E1013 06:39:26.004920 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\": container with ID starting with 609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40 not found: ID does not exist" containerID="609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.004968 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40"} err="failed to get container status \"609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\": rpc error: code = NotFound desc = could not find container \"609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40\": container with ID starting with 609c4596efaab92d49947c105086a9c5f18c54afaddff45ff05c576f36179b40 not found: ID does not exist" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.005009 4833 scope.go:117] "RemoveContainer" containerID="8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a" Oct 13 06:39:26 crc kubenswrapper[4833]: E1013 06:39:26.005348 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\": container with ID starting with 8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a not found: ID does not exist" containerID="8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.005396 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a"} err="failed to get container status \"8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\": rpc error: code = NotFound desc = could not find container \"8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a\": container with ID starting with 8ba243f15dbb99de5dcc1b9b6ac681c13174032fdd93cc45ad6c5f64c3d9f04a not found: ID does not exist" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.005423 4833 scope.go:117] "RemoveContainer" containerID="2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e" Oct 13 06:39:26 crc kubenswrapper[4833]: E1013 06:39:26.005847 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\": container with ID starting with 2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e not found: ID does not exist" containerID="2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.005879 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e"} err="failed to get container status \"2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\": rpc error: code = NotFound desc = could not find container \"2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e\": container with ID starting with 2533e2fcdf22393df59e568c770b91009e79bbb3c08454cdca11c601816e8b2e not found: ID does not exist" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.005894 4833 scope.go:117] "RemoveContainer" containerID="75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a" Oct 13 06:39:26 crc kubenswrapper[4833]: E1013 06:39:26.006214 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\": container with ID starting with 75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a not found: ID does not exist" containerID="75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.006249 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a"} err="failed to get container status \"75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\": rpc error: code = NotFound desc = could not find container \"75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a\": container with ID starting with 75b6fc3ac039fbeec39f04bf6ecf10a34925cbdca1a267e0904f0ec07288c24a not found: ID does not exist" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.006276 4833 scope.go:117] "RemoveContainer" containerID="c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368" Oct 13 06:39:26 crc 
kubenswrapper[4833]: E1013 06:39:26.006566 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\": container with ID starting with c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368 not found: ID does not exist" containerID="c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.006607 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368"} err="failed to get container status \"c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\": rpc error: code = NotFound desc = could not find container \"c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368\": container with ID starting with c28e0fae6ba01fd81c65f147101fb93346528af845f214458863537fcbb2e368 not found: ID does not exist" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.006632 4833 scope.go:117] "RemoveContainer" containerID="5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4" Oct 13 06:39:26 crc kubenswrapper[4833]: E1013 06:39:26.007044 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\": container with ID starting with 5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4 not found: ID does not exist" containerID="5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.007079 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4"} err="failed to get container status \"5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\": rpc error: code = NotFound desc = could not find container \"5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4\": container with ID starting with 5b790fb3b64a78ea9cc975951d3f4ec8f3fd347aff9e31b2eac8d3baa01087b4 not found: ID does not exist" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.007105 4833 scope.go:117] "RemoveContainer" containerID="73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15" Oct 13 06:39:26 crc kubenswrapper[4833]: E1013 06:39:26.007433 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\": container with ID starting with 73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15 not found: ID does not exist" containerID="73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.007462 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15"} err="failed to get container status \"73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\": rpc error: code = NotFound desc = could not find container \"73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15\": container with ID starting with 73f7d665b66493261fc99c0816676c4bff9871a114a73963ffe498bbd03eaf15 not found: ID does not exist"
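The "RemoveContainer" / "ContainerStatus from runtime service failed" pairs above are the kubelet garbage-collecting the deleted ovnkube-node-wnpc6 pod: once CRI-O has removed a container, every follow-up status probe returns gRPC NotFound, which pod_container_deletor merely logs — the container is already gone, so the errors are benign. A minimal sketch of the same probe against the CRI socket, hedged: the socket path is CRI-O's default, the container ID is taken from this log, and this is illustrative client code, not kubelet source:

```go
// Illustrative only: probe one of the container IDs above via CRI and
// treat gRPC NotFound the way pod_container_deletor does — as "already
// removed", not as a real failure.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// ID taken from the "RemoveContainer" entries above.
	id := "c5f10ce598296c8168c432174c128e2e124e0eee35617722c804586b0dea4a49"
	_, err = runtimeapi.NewRuntimeServiceClient(conn).
		ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	switch {
	case err == nil:
		fmt.Println("container still known to the runtime")
	case status.Code(err) == codes.NotFound:
		fmt.Println("already removed; benign, exactly what the log shows")
	default:
		panic(err)
	}
}
```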
Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.648957 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9a788e-b626-43a8-955a-bf4a5a3cb145" path="/var/lib/kubelet/pods/cb9a788e-b626-43a8-955a-bf4a5a3cb145/volumes" Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.776658 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"e3482eeb31312991d7bf1ce951c879804dae6079e091fa980bbb6891f1308de2"} Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.776704 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"7bc6e3582f91aaed2e87769fc6e5d527ab417d01a4b2516efb980ef029dd15fb"} Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.776719 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"6b04d3ee23da13b57677e19e4275673efc6aaf8b0a3a171d6ae28eff87ca1ef7"} Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.776732 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"650c72beff65ca8c58f461a58d529c753a937ac73fe75277adb69165bf74ee9a"} Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.776743 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"7dc681a05c3925bba56f73731c3ce3ff023df830f2dda4ee7da2c3c2a8f254d7"} Oct 13 06:39:26 crc kubenswrapper[4833]: I1013 06:39:26.776754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"064b9acde47107cfc3ec3c47eed83ebbe490b14175bb10e1e1f226a69ec3a8b2"} Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.313565 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-jq9tm"] Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.314614 4833 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.317293 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.317472 4833 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bcgmx" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.317447 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.317799 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.436634 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f098f03e-e4ee-4cad-b3cd-6887b3423124-crc-storage\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.436863 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hsk\" (UniqueName: \"kubernetes.io/projected/f098f03e-e4ee-4cad-b3cd-6887b3423124-kube-api-access-v7hsk\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.436948 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f098f03e-e4ee-4cad-b3cd-6887b3423124-node-mnt\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.538223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hsk\" (UniqueName: \"kubernetes.io/projected/f098f03e-e4ee-4cad-b3cd-6887b3423124-kube-api-access-v7hsk\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.538360 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f098f03e-e4ee-4cad-b3cd-6887b3423124-node-mnt\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.538446 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f098f03e-e4ee-4cad-b3cd-6887b3423124-crc-storage\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.539010 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f098f03e-e4ee-4cad-b3cd-6887b3423124-node-mnt\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.540020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f098f03e-e4ee-4cad-b3cd-6887b3423124-crc-storage\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.566220 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hsk\" (UniqueName: \"kubernetes.io/projected/f098f03e-e4ee-4cad-b3cd-6887b3423124-kube-api-access-v7hsk\") pod \"crc-storage-crc-jq9tm\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") " pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.635589 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: E1013 06:39:29.661344 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(344e2153a1031893308e92764a6f660f36d9365eca10291f6b1a54750e1aba48): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 13 06:39:29 crc kubenswrapper[4833]: E1013 06:39:29.661438 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(344e2153a1031893308e92764a6f660f36d9365eca10291f6b1a54750e1aba48): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: E1013 06:39:29.661475 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(344e2153a1031893308e92764a6f660f36d9365eca10291f6b1a54750e1aba48): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:29 crc kubenswrapper[4833]: E1013 06:39:29.661563 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jq9tm_crc-storage(f098f03e-e4ee-4cad-b3cd-6887b3423124)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jq9tm_crc-storage(f098f03e-e4ee-4cad-b3cd-6887b3423124)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(344e2153a1031893308e92764a6f660f36d9365eca10291f6b1a54750e1aba48): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jq9tm" podUID="f098f03e-e4ee-4cad-b3cd-6887b3423124" Oct 13 06:39:29 crc kubenswrapper[4833]: I1013 06:39:29.813241 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"2a29c540a282c1d28c3700550b4c946c67f3fd27db7115791bb85d1e14473b35"} Oct 13 06:39:31 crc kubenswrapper[4833]: I1013 06:39:31.828580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" event={"ID":"36312e75-9f84-412d-8b8d-a1f47c2cd14c","Type":"ContainerStarted","Data":"8b3dbf92ecf1d3ec6672dd060bdf9118e2def36354e7ccf52b8d0ebce9a17821"} Oct 13 06:39:31 crc kubenswrapper[4833]: I1013 06:39:31.828928 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:31 crc kubenswrapper[4833]: I1013 06:39:31.855790 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:31 crc kubenswrapper[4833]: I1013 06:39:31.879798 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" podStartSLOduration=7.879777923 podStartE2EDuration="7.879777923s" podCreationTimestamp="2025-10-13 06:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:39:31.87708846 +0000 UTC m=+661.977511376" watchObservedRunningTime="2025-10-13 06:39:31.879777923 +0000 UTC m=+661.980200839" Oct 13 06:39:32 crc kubenswrapper[4833]: I1013 06:39:32.019640 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jq9tm"] Oct 13 06:39:32 crc kubenswrapper[4833]: I1013 06:39:32.019750 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:32 crc kubenswrapper[4833]: I1013 06:39:32.020193 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:32 crc kubenswrapper[4833]: E1013 06:39:32.056921 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(c777e2583ea01bacbe0076728ba8e839cb063ddd144b894739e65527d41c168e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 13 06:39:32 crc kubenswrapper[4833]: E1013 06:39:32.056993 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(c777e2583ea01bacbe0076728ba8e839cb063ddd144b894739e65527d41c168e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:32 crc kubenswrapper[4833]: E1013 06:39:32.057017 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(c777e2583ea01bacbe0076728ba8e839cb063ddd144b894739e65527d41c168e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:32 crc kubenswrapper[4833]: E1013 06:39:32.057063 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jq9tm_crc-storage(f098f03e-e4ee-4cad-b3cd-6887b3423124)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jq9tm_crc-storage(f098f03e-e4ee-4cad-b3cd-6887b3423124)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(c777e2583ea01bacbe0076728ba8e839cb063ddd144b894739e65527d41c168e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jq9tm" podUID="f098f03e-e4ee-4cad-b3cd-6887b3423124" Oct 13 06:39:32 crc kubenswrapper[4833]: I1013 06:39:32.834952 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:32 crc kubenswrapper[4833]: I1013 06:39:32.836683 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:32 crc kubenswrapper[4833]: I1013 06:39:32.903922 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:37 crc kubenswrapper[4833]: I1013 06:39:37.626963 4833 scope.go:117] "RemoveContainer" containerID="8b45e4b875a145b2ae05a9c9a05af30df92a79bf06adb47cb6550ae4ac56cb08" Oct 13 06:39:37 crc kubenswrapper[4833]: E1013 06:39:37.627836 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zbg2r_openshift-multus(9d1bd0f7-c161-456d-af32-2da416006789)\"" pod="openshift-multus/multus-zbg2r" podUID="9d1bd0f7-c161-456d-af32-2da416006789" Oct 13 06:39:44 crc kubenswrapper[4833]: I1013 06:39:44.626139 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:44 crc kubenswrapper[4833]: I1013 06:39:44.628700 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:44 crc kubenswrapper[4833]: E1013 06:39:44.661402 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(c55a4b8f560c11ea3f0b6918aa7d59f8b151afaf044e59fe2cf03d4e0c9a4c35): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 13 06:39:44 crc kubenswrapper[4833]: E1013 06:39:44.661497 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(c55a4b8f560c11ea3f0b6918aa7d59f8b151afaf044e59fe2cf03d4e0c9a4c35): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:44 crc kubenswrapper[4833]: E1013 06:39:44.661532 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(c55a4b8f560c11ea3f0b6918aa7d59f8b151afaf044e59fe2cf03d4e0c9a4c35): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:44 crc kubenswrapper[4833]: E1013 06:39:44.661655 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jq9tm_crc-storage(f098f03e-e4ee-4cad-b3cd-6887b3423124)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jq9tm_crc-storage(f098f03e-e4ee-4cad-b3cd-6887b3423124)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jq9tm_crc-storage_f098f03e-e4ee-4cad-b3cd-6887b3423124_0(c55a4b8f560c11ea3f0b6918aa7d59f8b151afaf044e59fe2cf03d4e0c9a4c35): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jq9tm" podUID="f098f03e-e4ee-4cad-b3cd-6887b3423124" Oct 13 06:39:51 crc kubenswrapper[4833]: I1013 06:39:51.627847 4833 scope.go:117] "RemoveContainer" containerID="8b45e4b875a145b2ae05a9c9a05af30df92a79bf06adb47cb6550ae4ac56cb08" Oct 13 06:39:51 crc kubenswrapper[4833]: I1013 06:39:51.959510 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zbg2r_9d1bd0f7-c161-456d-af32-2da416006789/kube-multus/2.log" Oct 13 06:39:51 crc kubenswrapper[4833]: I1013 06:39:51.959955 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zbg2r" event={"ID":"9d1bd0f7-c161-456d-af32-2da416006789","Type":"ContainerStarted","Data":"0a29854c39c7f98abaef2daa445cabbd7b4d966390f2c71a06ee3271a44a1200"} Oct 13 06:39:55 crc kubenswrapper[4833]: I1013 06:39:55.177442 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fdqvk" Oct 13 06:39:56 crc kubenswrapper[4833]: I1013 06:39:56.627896 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm" Oct 13 06:39:56 crc kubenswrapper[4833]: I1013 06:39:56.628435 4833 util.go:30] "No sandbox for pod can be found. 
Oct 13 06:39:56 crc kubenswrapper[4833]: I1013 06:39:56.809457 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jq9tm"]
Oct 13 06:39:56 crc kubenswrapper[4833]: I1013 06:39:56.816959 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 13 06:39:56 crc kubenswrapper[4833]: I1013 06:39:56.988285 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jq9tm" event={"ID":"f098f03e-e4ee-4cad-b3cd-6887b3423124","Type":"ContainerStarted","Data":"cc3aa2339f4cec7b4ce7dc9cd84e67cb49885832a9f4faf5709d62b88371d76b"}
Oct 13 06:39:59 crc kubenswrapper[4833]: I1013 06:39:59.004386 4833 generic.go:334] "Generic (PLEG): container finished" podID="f098f03e-e4ee-4cad-b3cd-6887b3423124" containerID="0e81e31fbe90c4b224e4a69666ed044631eace94d7dcc2716994858b155a4812" exitCode=0
Oct 13 06:39:59 crc kubenswrapper[4833]: I1013 06:39:59.004502 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jq9tm" event={"ID":"f098f03e-e4ee-4cad-b3cd-6887b3423124","Type":"ContainerDied","Data":"0e81e31fbe90c4b224e4a69666ed044631eace94d7dcc2716994858b155a4812"}
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.319288 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm"
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.456779 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f098f03e-e4ee-4cad-b3cd-6887b3423124-node-mnt\") pod \"f098f03e-e4ee-4cad-b3cd-6887b3423124\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") "
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.456860 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f098f03e-e4ee-4cad-b3cd-6887b3423124-crc-storage\") pod \"f098f03e-e4ee-4cad-b3cd-6887b3423124\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") "
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.456911 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7hsk\" (UniqueName: \"kubernetes.io/projected/f098f03e-e4ee-4cad-b3cd-6887b3423124-kube-api-access-v7hsk\") pod \"f098f03e-e4ee-4cad-b3cd-6887b3423124\" (UID: \"f098f03e-e4ee-4cad-b3cd-6887b3423124\") "
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.456977 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f098f03e-e4ee-4cad-b3cd-6887b3423124-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "f098f03e-e4ee-4cad-b3cd-6887b3423124" (UID: "f098f03e-e4ee-4cad-b3cd-6887b3423124"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.457256 4833 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f098f03e-e4ee-4cad-b3cd-6887b3423124-node-mnt\") on node \"crc\" DevicePath \"\""
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.464991 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f098f03e-e4ee-4cad-b3cd-6887b3423124-kube-api-access-v7hsk" (OuterVolumeSpecName: "kube-api-access-v7hsk") pod "f098f03e-e4ee-4cad-b3cd-6887b3423124" (UID: "f098f03e-e4ee-4cad-b3cd-6887b3423124"). InnerVolumeSpecName "kube-api-access-v7hsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.480404 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f098f03e-e4ee-4cad-b3cd-6887b3423124-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "f098f03e-e4ee-4cad-b3cd-6887b3423124" (UID: "f098f03e-e4ee-4cad-b3cd-6887b3423124"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.558645 4833 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f098f03e-e4ee-4cad-b3cd-6887b3423124-crc-storage\") on node \"crc\" DevicePath \"\""
Oct 13 06:40:00 crc kubenswrapper[4833]: I1013 06:40:00.558700 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7hsk\" (UniqueName: \"kubernetes.io/projected/f098f03e-e4ee-4cad-b3cd-6887b3423124-kube-api-access-v7hsk\") on node \"crc\" DevicePath \"\""
Oct 13 06:40:01 crc kubenswrapper[4833]: I1013 06:40:01.026319 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jq9tm" event={"ID":"f098f03e-e4ee-4cad-b3cd-6887b3423124","Type":"ContainerDied","Data":"cc3aa2339f4cec7b4ce7dc9cd84e67cb49885832a9f4faf5709d62b88371d76b"}
Oct 13 06:40:01 crc kubenswrapper[4833]: I1013 06:40:01.026367 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc3aa2339f4cec7b4ce7dc9cd84e67cb49885832a9f4faf5709d62b88371d76b"
Oct 13 06:40:01 crc kubenswrapper[4833]: I1013 06:40:01.026837 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jq9tm"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.076063 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"]
Oct 13 06:40:07 crc kubenswrapper[4833]: E1013 06:40:07.076614 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f098f03e-e4ee-4cad-b3cd-6887b3423124" containerName="storage"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.076633 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f098f03e-e4ee-4cad-b3cd-6887b3423124" containerName="storage"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.076955 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f098f03e-e4ee-4cad-b3cd-6887b3423124" containerName="storage"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.077821 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.081933 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.106770 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"]
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.168924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqcm\" (UniqueName: \"kubernetes.io/projected/962d14bb-624a-4fa8-93ae-e81f514487ca-kube-api-access-gjqcm\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.168983 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.169006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.269928 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqcm\" (UniqueName: \"kubernetes.io/projected/962d14bb-624a-4fa8-93ae-e81f514487ca-kube-api-access-gjqcm\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.269972 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.269992 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.270525 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.270617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.302868 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjqcm\" (UniqueName: \"kubernetes.io/projected/962d14bb-624a-4fa8-93ae-e81f514487ca-kube-api-access-gjqcm\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.411898 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:07 crc kubenswrapper[4833]: I1013 06:40:07.674595 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"]
Oct 13 06:40:08 crc kubenswrapper[4833]: I1013 06:40:08.069664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5" event={"ID":"962d14bb-624a-4fa8-93ae-e81f514487ca","Type":"ContainerStarted","Data":"36b5758bb58f473307a9f7504077de04ca1756029d781677d73627bf9c22f7e7"}
Oct 13 06:40:08 crc kubenswrapper[4833]: I1013 06:40:08.069727 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5" event={"ID":"962d14bb-624a-4fa8-93ae-e81f514487ca","Type":"ContainerStarted","Data":"3d6783c92030a9eb43edb0bc68169ed07f0b1d31b83fd7ef07f26bb149c30910"}
Oct 13 06:40:09 crc kubenswrapper[4833]: I1013 06:40:09.077826 4833 generic.go:334] "Generic (PLEG): container finished" podID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerID="36b5758bb58f473307a9f7504077de04ca1756029d781677d73627bf9c22f7e7" exitCode=0
Oct 13 06:40:09 crc kubenswrapper[4833]: I1013 06:40:09.077953 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5" event={"ID":"962d14bb-624a-4fa8-93ae-e81f514487ca","Type":"ContainerDied","Data":"36b5758bb58f473307a9f7504077de04ca1756029d781677d73627bf9c22f7e7"}
Oct 13 06:40:11 crc kubenswrapper[4833]: I1013 06:40:11.100678 4833 generic.go:334] "Generic (PLEG): container finished" podID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerID="fe39cb9f2f7805770906c376143db7867590535badf72c84b581929916af950e" exitCode=0
Oct 13 06:40:11 crc kubenswrapper[4833]: I1013 06:40:11.100844 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5" event={"ID":"962d14bb-624a-4fa8-93ae-e81f514487ca","Type":"ContainerDied","Data":"fe39cb9f2f7805770906c376143db7867590535badf72c84b581929916af950e"}
Oct 13 06:40:12 crc kubenswrapper[4833]: I1013 06:40:12.109857 4833 generic.go:334] "Generic (PLEG): container finished" podID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerID="be8ec4c294658466121295b4fe45b49382d80ba609ae9cf6e9bc52b9d8aeeb8f" exitCode=0
Oct 13 06:40:12 crc kubenswrapper[4833]: I1013 06:40:12.109901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5" event={"ID":"962d14bb-624a-4fa8-93ae-e81f514487ca","Type":"ContainerDied","Data":"be8ec4c294658466121295b4fe45b49382d80ba609ae9cf6e9bc52b9d8aeeb8f"}
Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.383084 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5"
Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.557057 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-util\") pod \"962d14bb-624a-4fa8-93ae-e81f514487ca\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") "
Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.557133 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjqcm\" (UniqueName: \"kubernetes.io/projected/962d14bb-624a-4fa8-93ae-e81f514487ca-kube-api-access-gjqcm\") pod \"962d14bb-624a-4fa8-93ae-e81f514487ca\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") "
Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.557271 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-bundle\") pod \"962d14bb-624a-4fa8-93ae-e81f514487ca\" (UID: \"962d14bb-624a-4fa8-93ae-e81f514487ca\") "
Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.558432 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-bundle" (OuterVolumeSpecName: "bundle") pod "962d14bb-624a-4fa8-93ae-e81f514487ca" (UID: "962d14bb-624a-4fa8-93ae-e81f514487ca"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.567910 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962d14bb-624a-4fa8-93ae-e81f514487ca-kube-api-access-gjqcm" (OuterVolumeSpecName: "kube-api-access-gjqcm") pod "962d14bb-624a-4fa8-93ae-e81f514487ca" (UID: "962d14bb-624a-4fa8-93ae-e81f514487ca"). InnerVolumeSpecName "kube-api-access-gjqcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.635946 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-util" (OuterVolumeSpecName: "util") pod "962d14bb-624a-4fa8-93ae-e81f514487ca" (UID: "962d14bb-624a-4fa8-93ae-e81f514487ca"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.659353 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-util\") on node \"crc\" DevicePath \"\"" Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.659401 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjqcm\" (UniqueName: \"kubernetes.io/projected/962d14bb-624a-4fa8-93ae-e81f514487ca-kube-api-access-gjqcm\") on node \"crc\" DevicePath \"\"" Oct 13 06:40:13 crc kubenswrapper[4833]: I1013 06:40:13.659414 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/962d14bb-624a-4fa8-93ae-e81f514487ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:40:14 crc kubenswrapper[4833]: I1013 06:40:14.127818 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5" event={"ID":"962d14bb-624a-4fa8-93ae-e81f514487ca","Type":"ContainerDied","Data":"3d6783c92030a9eb43edb0bc68169ed07f0b1d31b83fd7ef07f26bb149c30910"} Oct 13 06:40:14 crc kubenswrapper[4833]: I1013 06:40:14.127881 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d6783c92030a9eb43edb0bc68169ed07f0b1d31b83fd7ef07f26bb149c30910" Oct 13 06:40:14 crc kubenswrapper[4833]: I1013 06:40:14.127962 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.829474 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk"] Oct 13 06:40:18 crc kubenswrapper[4833]: E1013 06:40:18.830063 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerName="util" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.830081 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerName="util" Oct 13 06:40:18 crc kubenswrapper[4833]: E1013 06:40:18.830091 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerName="extract" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.830098 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerName="extract" Oct 13 06:40:18 crc kubenswrapper[4833]: E1013 06:40:18.830112 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerName="pull" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.830120 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerName="pull" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.830236 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="962d14bb-624a-4fa8-93ae-e81f514487ca" containerName="extract" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.830676 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.832743 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.833331 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-77xzt" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.833458 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 13 06:40:18 crc kubenswrapper[4833]: I1013 06:40:18.842913 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk"] Oct 13 06:40:19 crc kubenswrapper[4833]: I1013 06:40:19.031835 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w22l\" (UniqueName: \"kubernetes.io/projected/9694fd5a-c5d2-4d3e-9e5b-5ca415933b33-kube-api-access-2w22l\") pod \"nmstate-operator-858ddd8f98-4qbpk\" (UID: \"9694fd5a-c5d2-4d3e-9e5b-5ca415933b33\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk" Oct 13 06:40:19 crc kubenswrapper[4833]: I1013 06:40:19.133047 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w22l\" (UniqueName: \"kubernetes.io/projected/9694fd5a-c5d2-4d3e-9e5b-5ca415933b33-kube-api-access-2w22l\") pod \"nmstate-operator-858ddd8f98-4qbpk\" (UID: \"9694fd5a-c5d2-4d3e-9e5b-5ca415933b33\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk" Oct 13 06:40:19 crc kubenswrapper[4833]: I1013 06:40:19.156801 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w22l\" (UniqueName: \"kubernetes.io/projected/9694fd5a-c5d2-4d3e-9e5b-5ca415933b33-kube-api-access-2w22l\") pod \"nmstate-operator-858ddd8f98-4qbpk\" (UID: \"9694fd5a-c5d2-4d3e-9e5b-5ca415933b33\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk" Oct 13 06:40:19 crc kubenswrapper[4833]: I1013 06:40:19.447003 4833 util.go:30] "No sandbox for pod can be found. 
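The cpu_manager/state_mem/memory_manager burst above fires when the next pod is admitted: stale CPU and memory assignments left by the deleted bundle pod (containers pull, extract, util) are purged first. A sketch under the assumption that the managers simply drop entries whose pod UID is no longer active:

package main

import "fmt"

// removeStaleState drops per-container resource assignments for pods that are
// no longer active, printing one line per removal like cpu_manager.go:410.
func removeStaleState(assignments map[string][]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for _, name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(assignments, podUID)
	}
}

func main() {
	assignments := map[string][]string{
		"962d14bb-624a-4fa8-93ae-e81f514487ca": {"util", "extract", "pull"},
	}
	removeStaleState(assignments, map[string]bool{}) // no active pod shares that UID
}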
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk" Oct 13 06:40:19 crc kubenswrapper[4833]: I1013 06:40:19.625078 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk"] Oct 13 06:40:20 crc kubenswrapper[4833]: I1013 06:40:20.163759 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk" event={"ID":"9694fd5a-c5d2-4d3e-9e5b-5ca415933b33","Type":"ContainerStarted","Data":"7fdcab98d57392b49d106ed0c46ed22870de94898fc1f55f689ede95d622bd52"} Oct 13 06:40:22 crc kubenswrapper[4833]: I1013 06:40:22.180275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk" event={"ID":"9694fd5a-c5d2-4d3e-9e5b-5ca415933b33","Type":"ContainerStarted","Data":"b910d9033806d2de1c1b238a2065482bd0d7f0262b743f11897668f6dd9e8780"} Oct 13 06:40:22 crc kubenswrapper[4833]: I1013 06:40:22.195393 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4qbpk" podStartSLOduration=2.419828806 podStartE2EDuration="4.195374229s" podCreationTimestamp="2025-10-13 06:40:18 +0000 UTC" firstStartedPulling="2025-10-13 06:40:19.636466692 +0000 UTC m=+709.736889608" lastFinishedPulling="2025-10-13 06:40:21.412012115 +0000 UTC m=+711.512435031" observedRunningTime="2025-10-13 06:40:22.194593208 +0000 UTC m=+712.295016144" watchObservedRunningTime="2025-10-13 06:40:22.195374229 +0000 UTC m=+712.295797135" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.500676 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.503310 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.508173 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.508393 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c574q" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.531907 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.532679 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.534786 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.537734 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2f7\" (UniqueName: \"kubernetes.io/projected/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-kube-api-access-7d2f7\") pod \"nmstate-webhook-6cdbc54649-zfgzs\" (UID: \"3fd37d62-cc28-41c0-a6ee-f086c41cbcec\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.537783 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2xg\" (UniqueName: \"kubernetes.io/projected/be580188-b967-4a91-b2ff-5b82f300d50f-kube-api-access-qh2xg\") pod \"nmstate-metrics-fdff9cb8d-drn9l\" (UID: \"be580188-b967-4a91-b2ff-5b82f300d50f\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.537942 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-zfgzs\" (UID: \"3fd37d62-cc28-41c0-a6ee-f086c41cbcec\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.554285 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.558156 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-d45xz"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.558988 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.638811 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh2xg\" (UniqueName: \"kubernetes.io/projected/be580188-b967-4a91-b2ff-5b82f300d50f-kube-api-access-qh2xg\") pod \"nmstate-metrics-fdff9cb8d-drn9l\" (UID: \"be580188-b967-4a91-b2ff-5b82f300d50f\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.638867 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-dbus-socket\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.638893 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-nmstate-lock\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.638936 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-zfgzs\" (UID: \"3fd37d62-cc28-41c0-a6ee-f086c41cbcec\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.639105 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjkd\" (UniqueName: \"kubernetes.io/projected/a0c9d202-f469-4633-85a2-16cea67b5d26-kube-api-access-vtjkd\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.639138 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-ovs-socket\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.639169 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d2f7\" (UniqueName: \"kubernetes.io/projected/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-kube-api-access-7d2f7\") pod \"nmstate-webhook-6cdbc54649-zfgzs\" (UID: \"3fd37d62-cc28-41c0-a6ee-f086c41cbcec\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:27 crc kubenswrapper[4833]: E1013 06:40:27.639197 4833 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 13 06:40:27 crc kubenswrapper[4833]: E1013 06:40:27.639285 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-tls-key-pair podName:3fd37d62-cc28-41c0-a6ee-f086c41cbcec nodeName:}" failed. No retries permitted until 2025-10-13 06:40:28.139263146 +0000 UTC m=+718.239686062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-tls-key-pair") pod "nmstate-webhook-6cdbc54649-zfgzs" (UID: "3fd37d62-cc28-41c0-a6ee-f086c41cbcec") : secret "openshift-nmstate-webhook" not found Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.648569 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.649319 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.651297 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.651481 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.653885 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jktfk" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.660777 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.666459 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh2xg\" (UniqueName: \"kubernetes.io/projected/be580188-b967-4a91-b2ff-5b82f300d50f-kube-api-access-qh2xg\") pod \"nmstate-metrics-fdff9cb8d-drn9l\" (UID: \"be580188-b967-4a91-b2ff-5b82f300d50f\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.669283 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d2f7\" (UniqueName: \"kubernetes.io/projected/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-kube-api-access-7d2f7\") pod \"nmstate-webhook-6cdbc54649-zfgzs\" (UID: \"3fd37d62-cc28-41c0-a6ee-f086c41cbcec\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.746162 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjkd\" (UniqueName: \"kubernetes.io/projected/a0c9d202-f469-4633-85a2-16cea67b5d26-kube-api-access-vtjkd\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.746225 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-ovs-socket\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.746277 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-dbus-socket\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.746300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
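The tls-key-pair failure above shows the volume operation backoff: the secret does not exist yet, so the operation is parked and may not be retried before the failure time plus durationBeforeRetry. The first-failure delay of 500ms is verifiable from the log itself (06:40:27.639263146 + 500ms is exactly the logged 06:40:28.139263146). A sketch of that gate, assuming the usual exponential-backoff shape (initial 500ms, doubling on repeat failures, capped; the exact cap used here is an assumption):

package main

import (
	"fmt"
	"time"
)

type expBackoff struct {
	lastErrorTime time.Time
	delay         time.Duration
}

// update records a failure and grows the retry delay: 500ms on the first
// error, doubling afterwards up to an assumed cap.
func (b *expBackoff) update(now time.Time) {
	const initial = 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
	if b.delay == 0 {
		b.delay = initial
	} else {
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	b.lastErrorTime = now
}

func main() {
	b := &expBackoff{}
	fail := time.Date(2025, time.October, 13, 6, 40, 27, 639263146, time.UTC)
	b.update(fail)
	fmt.Println("no retries permitted until", b.lastErrorTime.Add(b.delay))
	// 2025-10-13 06:40:28.139263146 +0000 UTC
}

The retry never escalates here: the secret is created in the meantime and the tls-key-pair mount succeeds at 06:40:28.156 below.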
\"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-nmstate-lock\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.746372 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-nmstate-lock\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.746684 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-ovs-socket\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.746977 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a0c9d202-f469-4633-85a2-16cea67b5d26-dbus-socket\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.768021 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjkd\" (UniqueName: \"kubernetes.io/projected/a0c9d202-f469-4633-85a2-16cea67b5d26-kube-api-access-vtjkd\") pod \"nmstate-handler-d45xz\" (UID: \"a0c9d202-f469-4633-85a2-16cea67b5d26\") " pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.820812 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.841278 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-754bfd49bf-hhg4k"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.842200 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.848266 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d564272-a394-4689-b2c6-0685d447a2a4-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.848360 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l895q\" (UniqueName: \"kubernetes.io/projected/8d564272-a394-4689-b2c6-0685d447a2a4-kube-api-access-l895q\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.848403 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8d564272-a394-4689-b2c6-0685d447a2a4-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.851215 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-754bfd49bf-hhg4k"] Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.878504 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949167 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-service-ca\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949462 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l895q\" (UniqueName: \"kubernetes.io/projected/8d564272-a394-4689-b2c6-0685d447a2a4-kube-api-access-l895q\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949484 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-serving-cert\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949505 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-config\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949531 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8d564272-a394-4689-b2c6-0685d447a2a4-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949588 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhz9m\" (UniqueName: \"kubernetes.io/projected/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-kube-api-access-hhz9m\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949739 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-trusted-ca-bundle\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949803 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-oauth-config\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949857 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d564272-a394-4689-b2c6-0685d447a2a4-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.949953 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-oauth-serving-cert\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.950733 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8d564272-a394-4689-b2c6-0685d447a2a4-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.955253 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d564272-a394-4689-b2c6-0685d447a2a4-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:27 crc kubenswrapper[4833]: I1013 06:40:27.968911 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l895q\" (UniqueName: \"kubernetes.io/projected/8d564272-a394-4689-b2c6-0685d447a2a4-kube-api-access-l895q\") pod \"nmstate-console-plugin-6b874cbd85-9657w\" (UID: \"8d564272-a394-4689-b2c6-0685d447a2a4\") " 
pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.049629 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l"] Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.051461 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-serving-cert\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.051518 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-config\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.051586 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhz9m\" (UniqueName: \"kubernetes.io/projected/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-kube-api-access-hhz9m\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.051612 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-trusted-ca-bundle\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.051631 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-oauth-config\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.051677 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-oauth-serving-cert\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.051700 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-service-ca\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.054054 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-config\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.054132 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-oauth-serving-cert\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.054144 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-service-ca\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.054603 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-serving-cert\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.054605 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-console-oauth-config\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: W1013 06:40:28.059302 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe580188_b967_4a91_b2ff_5b82f300d50f.slice/crio-d596a347a71318f02c8479b6db7e0da8989c901090079a64ffacf4ac7349a839 WatchSource:0}: Error finding container d596a347a71318f02c8479b6db7e0da8989c901090079a64ffacf4ac7349a839: Status 404 returned error can't find the container with id d596a347a71318f02c8479b6db7e0da8989c901090079a64ffacf4ac7349a839 Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.059670 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-trusted-ca-bundle\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.068413 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhz9m\" (UniqueName: \"kubernetes.io/projected/75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41-kube-api-access-hhz9m\") pod \"console-754bfd49bf-hhg4k\" (UID: \"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41\") " pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.153212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-zfgzs\" (UID: \"3fd37d62-cc28-41c0-a6ee-f086c41cbcec\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.156165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3fd37d62-cc28-41c0-a6ee-f086c41cbcec-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-zfgzs\" (UID: \"3fd37d62-cc28-41c0-a6ee-f086c41cbcec\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.202436 4833 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-754bfd49bf-hhg4k" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.217030 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" event={"ID":"be580188-b967-4a91-b2ff-5b82f300d50f","Type":"ContainerStarted","Data":"d596a347a71318f02c8479b6db7e0da8989c901090079a64ffacf4ac7349a839"} Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.217961 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-d45xz" event={"ID":"a0c9d202-f469-4633-85a2-16cea67b5d26","Type":"ContainerStarted","Data":"2ccbc89dd318a57f38806419b96c46925b96ac9b2b8dbdad505f757bf8781cce"} Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.263067 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.417946 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-754bfd49bf-hhg4k"] Oct 13 06:40:28 crc kubenswrapper[4833]: W1013 06:40:28.426256 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75d7eef0_976c_4e3b_b73a_6a4eb1eb5f41.slice/crio-d08142c8f7f5d25f6da6482250133993eed8c324fab92bc0ff1ae87e2f5cba17 WatchSource:0}: Error finding container d08142c8f7f5d25f6da6482250133993eed8c324fab92bc0ff1ae87e2f5cba17: Status 404 returned error can't find the container with id d08142c8f7f5d25f6da6482250133993eed8c324fab92bc0ff1ae87e2f5cba17 Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.446593 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.465726 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w"] Oct 13 06:40:28 crc kubenswrapper[4833]: W1013 06:40:28.470751 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d564272_a394_4689_b2c6_0685d447a2a4.slice/crio-d08209fc9297750e4a78f5bbb1edecf09e3a172859cfcfb71dde3f35a60bbef3 WatchSource:0}: Error finding container d08209fc9297750e4a78f5bbb1edecf09e3a172859cfcfb71dde3f35a60bbef3: Status 404 returned error can't find the container with id d08209fc9297750e4a78f5bbb1edecf09e3a172859cfcfb71dde3f35a60bbef3 Oct 13 06:40:28 crc kubenswrapper[4833]: I1013 06:40:28.615243 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs"] Oct 13 06:40:28 crc kubenswrapper[4833]: W1013 06:40:28.620346 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fd37d62_cc28_41c0_a6ee_f086c41cbcec.slice/crio-d4bdd429354785cb8a389b4849564f831e3aa9a0487fd0269a549fb5f74664b5 WatchSource:0}: Error finding container d4bdd429354785cb8a389b4849564f831e3aa9a0487fd0269a549fb5f74664b5: Status 404 returned error can't find the container with id d4bdd429354785cb8a389b4849564f831e3aa9a0487fd0269a549fb5f74664b5 Oct 13 06:40:29 crc kubenswrapper[4833]: I1013 06:40:29.223643 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" event={"ID":"3fd37d62-cc28-41c0-a6ee-f086c41cbcec","Type":"ContainerStarted","Data":"d4bdd429354785cb8a389b4849564f831e3aa9a0487fd0269a549fb5f74664b5"} Oct 13 06:40:29 crc kubenswrapper[4833]: I1013 06:40:29.224960 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-754bfd49bf-hhg4k" event={"ID":"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41","Type":"ContainerStarted","Data":"be307d21c4644286e6c682f8b88bbe8cc93fad4f8dc11e924e65265aa5a65dc7"} Oct 13 06:40:29 crc kubenswrapper[4833]: I1013 06:40:29.224988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-754bfd49bf-hhg4k" event={"ID":"75d7eef0-976c-4e3b-b73a-6a4eb1eb5f41","Type":"ContainerStarted","Data":"d08142c8f7f5d25f6da6482250133993eed8c324fab92bc0ff1ae87e2f5cba17"} Oct 13 06:40:29 crc kubenswrapper[4833]: I1013 06:40:29.226898 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" event={"ID":"8d564272-a394-4689-b2c6-0685d447a2a4","Type":"ContainerStarted","Data":"d08209fc9297750e4a78f5bbb1edecf09e3a172859cfcfb71dde3f35a60bbef3"} Oct 13 06:40:29 crc kubenswrapper[4833]: I1013 06:40:29.248290 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-754bfd49bf-hhg4k" podStartSLOduration=2.248268049 podStartE2EDuration="2.248268049s" podCreationTimestamp="2025-10-13 06:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:40:29.243801539 +0000 UTC m=+719.344224455" watchObservedRunningTime="2025-10-13 06:40:29.248268049 +0000 UTC m=+719.348690965" Oct 13 06:40:30 crc kubenswrapper[4833]: I1013 06:40:30.543481 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:40:30 crc kubenswrapper[4833]: I1013 06:40:30.543556 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:40:31 crc kubenswrapper[4833]: I1013 06:40:31.241335 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" event={"ID":"be580188-b967-4a91-b2ff-5b82f300d50f","Type":"ContainerStarted","Data":"d26cc33eea027212562f724e6229e531f922d7ee5010c984cafdf5199a556d8c"} Oct 13 06:40:31 crc kubenswrapper[4833]: I1013 06:40:31.242867 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" event={"ID":"8d564272-a394-4689-b2c6-0685d447a2a4","Type":"ContainerStarted","Data":"186d9a9f9081da1efaeaf121cd6ff9caa0ae5842d7f47b5cb661aecf138ef4e1"} Oct 13 06:40:31 crc kubenswrapper[4833]: I1013 06:40:31.244604 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" event={"ID":"3fd37d62-cc28-41c0-a6ee-f086c41cbcec","Type":"ContainerStarted","Data":"6600083fddd3b98d69ed74e3f858a0695ae517e65a8c2443b9541a0f3d36343a"} Oct 13 06:40:31 crc kubenswrapper[4833]: I1013 06:40:31.244680 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs" Oct 13 06:40:31 crc kubenswrapper[4833]: I1013 06:40:31.246485 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-d45xz" event={"ID":"a0c9d202-f469-4633-85a2-16cea67b5d26","Type":"ContainerStarted","Data":"8dff157f567aa08e81909272b8b8cd0c5f33f07b4c56d51a3cba229cf6d6dd50"} Oct 13 06:40:31 crc kubenswrapper[4833]: I1013 06:40:31.246903 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-d45xz" Oct 13 06:40:31 crc kubenswrapper[4833]: I1013 06:40:31.276834 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9657w" podStartSLOduration=1.8704009240000001 podStartE2EDuration="4.276810446s" podCreationTimestamp="2025-10-13 06:40:27 +0000 UTC" firstStartedPulling="2025-10-13 06:40:28.473830995 +0000 UTC m=+718.574253911" lastFinishedPulling="2025-10-13 06:40:30.880240517 +0000 UTC m=+720.980663433" observedRunningTime="2025-10-13 06:40:31.258078802 +0000 UTC m=+721.358501718" watchObservedRunningTime="2025-10-13 06:40:31.276810446 +0000 UTC m=+721.377233362" Oct 13 06:40:31 crc kubenswrapper[4833]: I1013 06:40:31.293753 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-d45xz" podStartSLOduration=1.236988902 podStartE2EDuration="4.293736957s" podCreationTimestamp="2025-10-13 06:40:27 +0000 UTC" firstStartedPulling="2025-10-13 06:40:27.922427473 +0000 UTC m=+718.022850389" lastFinishedPulling="2025-10-13 06:40:30.979175528 +0000 UTC m=+721.079598444" observedRunningTime="2025-10-13 06:40:31.292362327 +0000 UTC m=+721.392785253" watchObservedRunningTime="2025-10-13 06:40:31.293736957 +0000 UTC m=+721.394159873" Oct 13 06:40:31 crc 
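The machine-config-daemon liveness failures above (and the repeat a minute later) are plain HTTP probes against 127.0.0.1:8798/health getting connection refused. A minimal sketch of such a probe, assuming the standard semantics that any 2xx/3xx response passes (the kubelet's real prober adds headers, per-probe timeouts, and failure thresholds):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check; a refused connection surfaces as
// exactly the "connect: connection refused" output in the log above.
func probe(url string) string {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Sprintf("failure output=%q", err.Error())
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success"
	}
	return fmt.Sprintf("failure status=%d", resp.StatusCode)
}

func main() {
	fmt.Println(probe("http://127.0.0.1:8798/health"))
}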
Oct 13 06:40:33 crc kubenswrapper[4833]: I1013 06:40:33.264287 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" event={"ID":"be580188-b967-4a91-b2ff-5b82f300d50f","Type":"ContainerStarted","Data":"485df97dc5190b861c5593080a7744887aef7240eadfc2227e3df198534b1e2b"}
Oct 13 06:40:33 crc kubenswrapper[4833]: I1013 06:40:33.283874 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-drn9l" podStartSLOduration=1.340968338 podStartE2EDuration="6.283855669s" podCreationTimestamp="2025-10-13 06:40:27 +0000 UTC" firstStartedPulling="2025-10-13 06:40:28.062474937 +0000 UTC m=+718.162897853" lastFinishedPulling="2025-10-13 06:40:33.005362268 +0000 UTC m=+723.105785184" observedRunningTime="2025-10-13 06:40:33.280276486 +0000 UTC m=+723.380699402" watchObservedRunningTime="2025-10-13 06:40:33.283855669 +0000 UTC m=+723.384278585"
Oct 13 06:40:37 crc kubenswrapper[4833]: I1013 06:40:37.901078 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-d45xz"
Oct 13 06:40:38 crc kubenswrapper[4833]: I1013 06:40:38.203659 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-754bfd49bf-hhg4k"
Oct 13 06:40:38 crc kubenswrapper[4833]: I1013 06:40:38.204071 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-754bfd49bf-hhg4k"
Oct 13 06:40:38 crc kubenswrapper[4833]: I1013 06:40:38.209957 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-754bfd49bf-hhg4k"
Oct 13 06:40:38 crc kubenswrapper[4833]: I1013 06:40:38.298903 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-754bfd49bf-hhg4k"
Oct 13 06:40:38 crc kubenswrapper[4833]: I1013 06:40:38.352109 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tgsfn"]
Oct 13 06:40:48 crc kubenswrapper[4833]: I1013 06:40:48.456155 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-zfgzs"
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.428096 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"]
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.429750 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.431790 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.442694 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"]
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.528947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.529024 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckhw\" (UniqueName: \"kubernetes.io/projected/17d37695-97f2-49f8-907b-a7b183bd0bc0-kube-api-access-6ckhw\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.529067 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.542265 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.542340 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.630354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckhw\" (UniqueName: \"kubernetes.io/projected/17d37695-97f2-49f8-907b-a7b183bd0bc0-kube-api-access-6ckhw\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"
Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.630422 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"
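[Editor's note] The machine-config-daemon liveness failure recurs at 06:40:30.54 and 06:41:00.54, thirty seconds apart, consistent with a periodic HTTP probe against http://127.0.0.1:8798/health. Functionally such a check amounts to the sketch below; this is an illustration of an HTTP liveness-style check, not the kubelet's prober code (the kubelet treats roughly any 2xx/3xx status as success):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP liveness-style check. Any transport error,
// such as the "connect: connection refused" in the log, or a non-2xx/3xx
// status counts as a failure.
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}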
pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.630495 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.631028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.631095 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.649447 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckhw\" (UniqueName: \"kubernetes.io/projected/17d37695-97f2-49f8-907b-a7b183bd0bc0-kube-api-access-6ckhw\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.747619 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.795455 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qz7k6"] Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.795651 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" podUID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" containerName="controller-manager" containerID="cri-o://5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3" gracePeriod=30 Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.898569 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"] Oct 13 06:41:00 crc kubenswrapper[4833]: I1013 06:41:00.899060 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" podUID="adfdbeae-0ada-4f22-937a-ff7fdb0d0901" containerName="route-controller-manager" containerID="cri-o://d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4" gracePeriod=30 Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.017827 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"] Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.147769 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.235753 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.288005 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78bc9686d5-lf24v"] Oct 13 06:41:01 crc kubenswrapper[4833]: E1013 06:41:01.288233 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" containerName="controller-manager" Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.288245 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" containerName="controller-manager" Oct 13 06:41:01 crc kubenswrapper[4833]: E1013 06:41:01.288254 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfdbeae-0ada-4f22-937a-ff7fdb0d0901" containerName="route-controller-manager" Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.288262 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfdbeae-0ada-4f22-937a-ff7fdb0d0901" containerName="route-controller-manager" Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.288372 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfdbeae-0ada-4f22-937a-ff7fdb0d0901" containerName="route-controller-manager" Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.288388 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" containerName="controller-manager" Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.288726 4833 util.go:30] "No sandbox for pod can be found. 
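[Editor's note] "Killing container with a grace period" with gracePeriod=30 means the runtime delivers the container's stop signal and escalates to a hard kill only if the process has not exited when the grace period lapses. A hedged sketch of that pattern for an ordinary process, not CRI-O's implementation:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the graceful-termination pattern: deliver SIGTERM,
// wait up to the grace period for exit, then hard-kill.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within the grace period")
	case <-time.After(grace):
		cmd.Process.Kill() // escalate, as the runtime would after 30s
		<-done
		fmt.Println("hard-killed after the grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 30*time.Second) // gracePeriod=30, as in the log
}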
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.302455 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78bc9686d5-lf24v"]
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338082 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-proxy-ca-bundles\") pod \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338123 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-client-ca\") pod \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338183 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-config\") pod \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338209 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba083af2-d9a6-42e5-99ec-2b89278b08a2-serving-cert\") pod \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338254 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-serving-cert\") pod \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8qc6\" (UniqueName: \"kubernetes.io/projected/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-kube-api-access-j8qc6\") pod \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338333 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-client-ca\") pod \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338349 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-config\") pod \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\" (UID: \"adfdbeae-0ada-4f22-937a-ff7fdb0d0901\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.338372 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g24p\" (UniqueName: \"kubernetes.io/projected/ba083af2-d9a6-42e5-99ec-2b89278b08a2-kube-api-access-7g24p\") pod \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\" (UID: \"ba083af2-d9a6-42e5-99ec-2b89278b08a2\") "
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.339144 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-client-ca" (OuterVolumeSpecName: "client-ca") pod "adfdbeae-0ada-4f22-937a-ff7fdb0d0901" (UID: "adfdbeae-0ada-4f22-937a-ff7fdb0d0901"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.339209 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-config" (OuterVolumeSpecName: "config") pod "adfdbeae-0ada-4f22-937a-ff7fdb0d0901" (UID: "adfdbeae-0ada-4f22-937a-ff7fdb0d0901"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.339608 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "ba083af2-d9a6-42e5-99ec-2b89278b08a2" (UID: "ba083af2-d9a6-42e5-99ec-2b89278b08a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.339686 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ba083af2-d9a6-42e5-99ec-2b89278b08a2" (UID: "ba083af2-d9a6-42e5-99ec-2b89278b08a2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.340877 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-config" (OuterVolumeSpecName: "config") pod "ba083af2-d9a6-42e5-99ec-2b89278b08a2" (UID: "ba083af2-d9a6-42e5-99ec-2b89278b08a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.344752 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-kube-api-access-j8qc6" (OuterVolumeSpecName: "kube-api-access-j8qc6") pod "adfdbeae-0ada-4f22-937a-ff7fdb0d0901" (UID: "adfdbeae-0ada-4f22-937a-ff7fdb0d0901"). InnerVolumeSpecName "kube-api-access-j8qc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.346451 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "adfdbeae-0ada-4f22-937a-ff7fdb0d0901" (UID: "adfdbeae-0ada-4f22-937a-ff7fdb0d0901"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.346620 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba083af2-d9a6-42e5-99ec-2b89278b08a2-kube-api-access-7g24p" (OuterVolumeSpecName: "kube-api-access-7g24p") pod "ba083af2-d9a6-42e5-99ec-2b89278b08a2" (UID: "ba083af2-d9a6-42e5-99ec-2b89278b08a2"). InnerVolumeSpecName "kube-api-access-7g24p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.347549 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba083af2-d9a6-42e5-99ec-2b89278b08a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ba083af2-d9a6-42e5-99ec-2b89278b08a2" (UID: "ba083af2-d9a6-42e5-99ec-2b89278b08a2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.371418 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"]
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.372092 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.380918 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"]
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.434418 4833 generic.go:334] "Generic (PLEG): container finished" podID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerID="ec955fc9914bffba6c879436014495630d6ff996857b3e44b0ccc557c83dd312" exitCode=0
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.434584 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" event={"ID":"17d37695-97f2-49f8-907b-a7b183bd0bc0","Type":"ContainerDied","Data":"ec955fc9914bffba6c879436014495630d6ff996857b3e44b0ccc557c83dd312"}
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.434614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" event={"ID":"17d37695-97f2-49f8-907b-a7b183bd0bc0","Type":"ContainerStarted","Data":"9fc5c4ca056c0b84b7c5d5afdbc2abb301f734d80e343cc58b7d27e9f1d0e916"}
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.437526 4833 generic.go:334] "Generic (PLEG): container finished" podID="adfdbeae-0ada-4f22-937a-ff7fdb0d0901" containerID="d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4" exitCode=0
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.437622 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" event={"ID":"adfdbeae-0ada-4f22-937a-ff7fdb0d0901","Type":"ContainerDied","Data":"d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4"}
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.437636 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"
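[Editor's note] The "Generic (PLEG)" lines come from relisting: the pod lifecycle event generator periodically compares each container's current state against the last observed state and emits ContainerStarted/ContainerDied events for the differences, which the sync loop then consumes. A toy version of that diffing; the types and field names here are illustrative, not kubelet's:

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

// relist diffs the previous and current container states and emits the
// lifecycle events that appear as "ContainerStarted"/"ContainerDied" above.
// Map iteration order is unspecified, which is fine for a sketch.
func relist(prev, cur map[string]state) []string {
	var events []string
	for id, s := range cur {
		switch old := prev[id]; {
		case old != running && s == running:
			events = append(events, "ContainerStarted "+id)
		case old == running && s == exited:
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	prev := map[string]state{"d29b2f8c": running}
	cur := map[string]state{"d29b2f8c": exited, "9fc5c4ca": running}
	fmt.Println(relist(prev, cur))
}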
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.437661 4833 scope.go:117] "RemoveContainer" containerID="d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.437649 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5" event={"ID":"adfdbeae-0ada-4f22-937a-ff7fdb0d0901","Type":"ContainerDied","Data":"d30d23e71b31f898ce173b34e7c5976e0a0df2f581cdf68a4955bf6442dadb21"}
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439206 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9mqb\" (UniqueName: \"kubernetes.io/projected/ed2252c8-233f-41c4-aed3-5ff1f9190620-kube-api-access-j9mqb\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-config\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-client-ca\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439301 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed2252c8-233f-41c4-aed3-5ff1f9190620-serving-cert\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439321 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-proxy-ca-bundles\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439362 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba083af2-d9a6-42e5-99ec-2b89278b08a2-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439375 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439389 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8qc6\" (UniqueName: \"kubernetes.io/projected/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-kube-api-access-j8qc6\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439399 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-client-ca\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439408 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfdbeae-0ada-4f22-937a-ff7fdb0d0901-config\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439416 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g24p\" (UniqueName: \"kubernetes.io/projected/ba083af2-d9a6-42e5-99ec-2b89278b08a2-kube-api-access-7g24p\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439424 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439432 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-client-ca\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.439441 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba083af2-d9a6-42e5-99ec-2b89278b08a2-config\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.441455 4833 generic.go:334] "Generic (PLEG): container finished" podID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" containerID="5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3" exitCode=0
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.441504 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" event={"ID":"ba083af2-d9a6-42e5-99ec-2b89278b08a2","Type":"ContainerDied","Data":"5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3"}
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.441547 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6" event={"ID":"ba083af2-d9a6-42e5-99ec-2b89278b08a2","Type":"ContainerDied","Data":"61c094351334356f940136cc582ca7d863ab70e590ac14a38a57a99f367aff05"}
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.441670 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qz7k6"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.452860 4833 scope.go:117] "RemoveContainer" containerID="d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4"
Oct 13 06:41:01 crc kubenswrapper[4833]: E1013 06:41:01.453175 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4\": container with ID starting with d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4 not found: ID does not exist" containerID="d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.453207 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4"} err="failed to get container status \"d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4\": rpc error: code = NotFound desc = could not find container \"d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4\": container with ID starting with d29b2f8c1a96ae0d11c2a312d83c361bf592b6d00ed768953777ac925bcc9ea4 not found: ID does not exist"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.453227 4833 scope.go:117] "RemoveContainer" containerID="5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.473459 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"]
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.479705 4833 scope.go:117] "RemoveContainer" containerID="5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3"
Oct 13 06:41:01 crc kubenswrapper[4833]: E1013 06:41:01.482102 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3\": container with ID starting with 5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3 not found: ID does not exist" containerID="5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.482140 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3"} err="failed to get container status \"5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3\": rpc error: code = NotFound desc = could not find container \"5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3\": container with ID starting with 5365d7bef70e431306f51ed52945bb8400a4eaa88b0daef806944ec98d9d96a3 not found: ID does not exist"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.485497 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-652c5"]
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.492072 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qz7k6"]
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.497441 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qz7k6"]
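[Editor's note] The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines are a benign race: the container was already removed together with its pod sandbox, so the follow-up status lookup gets rpc code NotFound and the kubelet logs it and moves on. Deletion code commonly treats NotFound as success; a hedged sketch using gRPC status codes, where removeContainer is a stand-in and not a real CRI client call:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for the runtime's remove RPC; here it always
// reports NotFound, like the follow-up lookups in the log above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// deleteIdempotent treats NotFound as success: if the container is already
// gone, the desired outcome of the deletion already holds.
func deleteIdempotent(id string) error {
	if err := removeContainer(id); status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	fmt.Println(deleteIdempotent("d29b2f8c")) // <nil>
}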
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540073 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9bd1dff-e19a-4feb-8543-cd8465c785d2-client-ca\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540115 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed2252c8-233f-41c4-aed3-5ff1f9190620-serving-cert\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540137 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9bd1dff-e19a-4feb-8543-cd8465c785d2-serving-cert\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540155 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-proxy-ca-bundles\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540200 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9bd1dff-e19a-4feb-8543-cd8465c785d2-config\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540222 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dflbp\" (UniqueName: \"kubernetes.io/projected/d9bd1dff-e19a-4feb-8543-cd8465c785d2-kube-api-access-dflbp\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540271 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9mqb\" (UniqueName: \"kubernetes.io/projected/ed2252c8-233f-41c4-aed3-5ff1f9190620-kube-api-access-j9mqb\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540347 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-config\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.540389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-client-ca\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.541874 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-client-ca\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.541872 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-proxy-ca-bundles\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.543025 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed2252c8-233f-41c4-aed3-5ff1f9190620-config\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.550313 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed2252c8-233f-41c4-aed3-5ff1f9190620-serving-cert\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.555016 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9mqb\" (UniqueName: \"kubernetes.io/projected/ed2252c8-233f-41c4-aed3-5ff1f9190620-kube-api-access-j9mqb\") pod \"controller-manager-78bc9686d5-lf24v\" (UID: \"ed2252c8-233f-41c4-aed3-5ff1f9190620\") " pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.641943 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9bd1dff-e19a-4feb-8543-cd8465c785d2-client-ca\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.641995 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9bd1dff-e19a-4feb-8543-cd8465c785d2-serving-cert\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.642039 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9bd1dff-e19a-4feb-8543-cd8465c785d2-config\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.642067 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dflbp\" (UniqueName: \"kubernetes.io/projected/d9bd1dff-e19a-4feb-8543-cd8465c785d2-kube-api-access-dflbp\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.643164 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9bd1dff-e19a-4feb-8543-cd8465c785d2-config\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.643200 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9bd1dff-e19a-4feb-8543-cd8465c785d2-client-ca\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.645127 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9bd1dff-e19a-4feb-8543-cd8465c785d2-serving-cert\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.657483 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dflbp\" (UniqueName: \"kubernetes.io/projected/d9bd1dff-e19a-4feb-8543-cd8465c785d2-kube-api-access-dflbp\") pod \"route-controller-manager-6fd94f88f6-d8mn9\" (UID: \"d9bd1dff-e19a-4feb-8543-cd8465c785d2\") " pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.681369 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.693204 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:01 crc kubenswrapper[4833]: I1013 06:41:01.921125 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"]
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.096475 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78bc9686d5-lf24v"]
Oct 13 06:41:02 crc kubenswrapper[4833]: W1013 06:41:02.102139 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded2252c8_233f_41c4_aed3_5ff1f9190620.slice/crio-db4e54a63197efde45e7615012fa4e6c87eb73a43c8749e437bee332ac7fe6c3 WatchSource:0}: Error finding container db4e54a63197efde45e7615012fa4e6c87eb73a43c8749e437bee332ac7fe6c3: Status 404 returned error can't find the container with id db4e54a63197efde45e7615012fa4e6c87eb73a43c8749e437bee332ac7fe6c3
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.450128 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9" event={"ID":"d9bd1dff-e19a-4feb-8543-cd8465c785d2","Type":"ContainerStarted","Data":"e0fa55c1734ff90da4ca9a314e35f7452b4f5d321838b0a20e4668d6fdc16a80"}
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.450458 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9" event={"ID":"d9bd1dff-e19a-4feb-8543-cd8465c785d2","Type":"ContainerStarted","Data":"74d798d855358cd1055c0187efd6f22473bdb8d4c55d420fcf348de04ad10fe2"}
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.450477 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.451773 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v" event={"ID":"ed2252c8-233f-41c4-aed3-5ff1f9190620","Type":"ContainerStarted","Data":"0cfd7c9f7dd2392c9f09e2dc279944867bb1b7b47cd08259eb2d2292e2b9fd83"}
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.451798 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v" event={"ID":"ed2252c8-233f-41c4-aed3-5ff1f9190620","Type":"ContainerStarted","Data":"db4e54a63197efde45e7615012fa4e6c87eb73a43c8749e437bee332ac7fe6c3"}
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.452017 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.462900 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v"
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.472215 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9" podStartSLOduration=1.472189938 podStartE2EDuration="1.472189938s" podCreationTimestamp="2025-10-13 06:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:41:02.468939914 +0000 UTC m=+752.569362840" watchObservedRunningTime="2025-10-13 06:41:02.472189938 +0000 UTC m=+752.572612854"
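[Editor's note] Unlike the nmstate pods earlier, this startup entry (and the lf24v one just below) reports firstStartedPulling and lastFinishedPulling as Go's zero time, 0001-01-01 00:00:00 +0000 UTC: no image pull happened, so the pull window is empty and podStartSLOduration equals podStartE2EDuration (1.472189938 here). The zero-value convention is easy to check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The tracker prints time.Time zero values when no pull occurred.
	var never time.Time
	fmt.Println(never)          // 0001-01-01 00:00:00 +0000 UTC
	fmt.Println(never.IsZero()) // true: empty pull window, so SLO == E2E
}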
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.488596 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78bc9686d5-lf24v" podStartSLOduration=1.488576503 podStartE2EDuration="1.488576503s" podCreationTimestamp="2025-10-13 06:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:41:02.486493873 +0000 UTC m=+752.586916789" watchObservedRunningTime="2025-10-13 06:41:02.488576503 +0000 UTC m=+752.588999419"
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.634498 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfdbeae-0ada-4f22-937a-ff7fdb0d0901" path="/var/lib/kubelet/pods/adfdbeae-0ada-4f22-937a-ff7fdb0d0901/volumes"
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.635042 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba083af2-d9a6-42e5-99ec-2b89278b08a2" path="/var/lib/kubelet/pods/ba083af2-d9a6-42e5-99ec-2b89278b08a2/volumes"
Oct 13 06:41:02 crc kubenswrapper[4833]: I1013 06:41:02.635470 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fd94f88f6-d8mn9"
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.397309 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-tgsfn" podUID="00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" containerName="console" containerID="cri-o://f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989" gracePeriod=15
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.458979 4833 generic.go:334] "Generic (PLEG): container finished" podID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerID="55f95aa839950044b753009e2b794f00d5d7f2a626be49e6eb9f495637aff51f" exitCode=0
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.459034 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" event={"ID":"17d37695-97f2-49f8-907b-a7b183bd0bc0","Type":"ContainerDied","Data":"55f95aa839950044b753009e2b794f00d5d7f2a626be49e6eb9f495637aff51f"}
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.826667 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tgsfn_00fbf18f-bb4d-4d88-801d-5598eb6f6fa2/console/0.log"
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.827171 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.970901 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-config\") pod \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") "
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.970973 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-trusted-ca-bundle\") pod \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") "
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.971040 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-serving-cert\") pod \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") "
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.971120 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-oauth-serving-cert\") pod \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") "
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.971145 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74pmn\" (UniqueName: \"kubernetes.io/projected/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-kube-api-access-74pmn\") pod \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") "
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.971366 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-oauth-config\") pod \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") "
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.971399 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-service-ca\") pod \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\" (UID: \"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2\") "
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.971944 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-config" (OuterVolumeSpecName: "console-config") pod "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" (UID: "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.972097 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" (UID: "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.972186 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-service-ca" (OuterVolumeSpecName: "service-ca") pod "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" (UID: "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.972763 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" (UID: "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.972907 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-service-ca\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.972933 4833 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-config\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.972946 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.972957 4833 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.985936 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-kube-api-access-74pmn" (OuterVolumeSpecName: "kube-api-access-74pmn") pod "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" (UID: "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2"). InnerVolumeSpecName "kube-api-access-74pmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.986718 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" (UID: "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:41:03 crc kubenswrapper[4833]: I1013 06:41:03.993126 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" (UID: "00fbf18f-bb4d-4d88-801d-5598eb6f6fa2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
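[Editor's note] Each console volume above passes through the same three phases already seen for the controller-manager pods: "UnmountVolume started" (the reconciler notices the volume is mounted but no longer desired), "UnmountVolume.TearDown succeeded" (the plugin's cleanup ran), and finally "Volume detached" (the actual state of the world is updated). A compressed, illustrative model of that reconcile pass, with names of my own choosing rather than kubelet's:

package main

import "fmt"

// reconcile models one pass of the volume manager: any volume still in the
// actual state but no longer desired is unmounted, torn down, and detached,
// producing the three log phases seen above for each console volume.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if desired[vol] {
			continue
		}
		fmt.Println("UnmountVolume started for volume", vol)
		fmt.Println("UnmountVolume.TearDown succeeded for volume", vol)
		delete(actual, vol) // update the actual state of the world
		fmt.Println("Volume detached for volume", vol)
	}
}

func main() {
	desired := map[string]bool{} // the console pod was deleted
	actual := map[string]bool{
		"console-config":       true,
		"console-serving-cert": true,
		"service-ca":           true,
	}
	reconcile(desired, actual)
}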
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.074018 4833 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.074061 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74pmn\" (UniqueName: \"kubernetes.io/projected/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-kube-api-access-74pmn\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.074075 4833 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2-console-oauth-config\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.466719 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tgsfn_00fbf18f-bb4d-4d88-801d-5598eb6f6fa2/console/0.log"
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.466766 4833 generic.go:334] "Generic (PLEG): container finished" podID="00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" containerID="f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989" exitCode=2
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.466814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tgsfn" event={"ID":"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2","Type":"ContainerDied","Data":"f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989"}
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.466841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tgsfn" event={"ID":"00fbf18f-bb4d-4d88-801d-5598eb6f6fa2","Type":"ContainerDied","Data":"6a7bff9589055443af10abb86c9299460b71801a49f0fb40e967dc46c79ce011"}
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.466856 4833 scope.go:117] "RemoveContainer" containerID="f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989"
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.466859 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tgsfn"
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.471942 4833 generic.go:334] "Generic (PLEG): container finished" podID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerID="a0c6a68a7af46f1013aeaa770b4fd082322bad6a849ff1faa2d93b82042ac1a8" exitCode=0
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.471980 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" event={"ID":"17d37695-97f2-49f8-907b-a7b183bd0bc0","Type":"ContainerDied","Data":"a0c6a68a7af46f1013aeaa770b4fd082322bad6a849ff1faa2d93b82042ac1a8"}
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.486903 4833 scope.go:117] "RemoveContainer" containerID="f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989"
Oct 13 06:41:04 crc kubenswrapper[4833]: E1013 06:41:04.487309 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989\": container with ID starting with f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989 not found: ID does not exist" containerID="f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989"
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.487360 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989"} err="failed to get container status \"f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989\": rpc error: code = NotFound desc = could not find container \"f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989\": container with ID starting with f6df0a09a7e6886f5366652bc3207e4501ddd59540cf3ae0b116942282666989 not found: ID does not exist"
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.510885 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tgsfn"]
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.514443 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tgsfn"]
Oct 13 06:41:04 crc kubenswrapper[4833]: I1013 06:41:04.633378 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" path="/var/lib/kubelet/pods/00fbf18f-bb4d-4d88-801d-5598eb6f6fa2/volumes"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.795909 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.896218 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ckhw\" (UniqueName: \"kubernetes.io/projected/17d37695-97f2-49f8-907b-a7b183bd0bc0-kube-api-access-6ckhw\") pod \"17d37695-97f2-49f8-907b-a7b183bd0bc0\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") "
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.896303 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-util\") pod \"17d37695-97f2-49f8-907b-a7b183bd0bc0\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") "
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.896346 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-bundle\") pod \"17d37695-97f2-49f8-907b-a7b183bd0bc0\" (UID: \"17d37695-97f2-49f8-907b-a7b183bd0bc0\") "
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.897769 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-bundle" (OuterVolumeSpecName: "bundle") pod "17d37695-97f2-49f8-907b-a7b183bd0bc0" (UID: "17d37695-97f2-49f8-907b-a7b183bd0bc0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.903313 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d37695-97f2-49f8-907b-a7b183bd0bc0-kube-api-access-6ckhw" (OuterVolumeSpecName: "kube-api-access-6ckhw") pod "17d37695-97f2-49f8-907b-a7b183bd0bc0" (UID: "17d37695-97f2-49f8-907b-a7b183bd0bc0"). InnerVolumeSpecName "kube-api-access-6ckhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.911917 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-util" (OuterVolumeSpecName: "util") pod "17d37695-97f2-49f8-907b-a7b183bd0bc0" (UID: "17d37695-97f2-49f8-907b-a7b183bd0bc0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.941922 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qkc8q"]
Oct 13 06:41:05 crc kubenswrapper[4833]: E1013 06:41:05.942134 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerName="util"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.942145 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerName="util"
Oct 13 06:41:05 crc kubenswrapper[4833]: E1013 06:41:05.942152 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerName="extract"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.942158 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerName="extract"
Oct 13 06:41:05 crc kubenswrapper[4833]: E1013 06:41:05.942166 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" containerName="console"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.942172 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" containerName="console"
Oct 13 06:41:05 crc kubenswrapper[4833]: E1013 06:41:05.942182 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerName="pull"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.942188 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerName="pull"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.942398 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d37695-97f2-49f8-907b-a7b183bd0bc0" containerName="extract"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.942463 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="00fbf18f-bb4d-4d88-801d-5598eb6f6fa2" containerName="console"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.944212 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkc8q"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.953322 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkc8q"]
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.997377 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-catalog-content\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.997434 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79g7\" (UniqueName: \"kubernetes.io/projected/d77d6f2f-70ef-4729-aa52-39c91463d165-kube-api-access-l79g7\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.997649 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-utilities\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q"
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.997726 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ckhw\" (UniqueName: \"kubernetes.io/projected/17d37695-97f2-49f8-907b-a7b183bd0bc0-kube-api-access-6ckhw\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.997765 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-util\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:05 crc kubenswrapper[4833]: I1013 06:41:05.997777 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17d37695-97f2-49f8-907b-a7b183bd0bc0-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.098764 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-utilities\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q"
Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.098826 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-catalog-content\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q"
Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.098855 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l79g7\" (UniqueName: \"kubernetes.io/projected/d77d6f2f-70ef-4729-aa52-39c91463d165-kube-api-access-l79g7\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q"
Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.099293 4833 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-utilities\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.099455 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-catalog-content\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.116959 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l79g7\" (UniqueName: \"kubernetes.io/projected/d77d6f2f-70ef-4729-aa52-39c91463d165-kube-api-access-l79g7\") pod \"redhat-operators-qkc8q\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.261282 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.462374 4833 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.488214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" event={"ID":"17d37695-97f2-49f8-907b-a7b183bd0bc0","Type":"ContainerDied","Data":"9fc5c4ca056c0b84b7c5d5afdbc2abb301f734d80e343cc58b7d27e9f1d0e916"} Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.488251 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fc5c4ca056c0b84b7c5d5afdbc2abb301f734d80e343cc58b7d27e9f1d0e916" Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.488311 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4" Oct 13 06:41:06 crc kubenswrapper[4833]: I1013 06:41:06.662041 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkc8q"] Oct 13 06:41:07 crc kubenswrapper[4833]: I1013 06:41:07.502337 4833 generic.go:334] "Generic (PLEG): container finished" podID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerID="b145c8ce7f0e458af9ec669e36dea683b91b26c5d42006c1617cdf941b6394f6" exitCode=0 Oct 13 06:41:07 crc kubenswrapper[4833]: I1013 06:41:07.502583 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkc8q" event={"ID":"d77d6f2f-70ef-4729-aa52-39c91463d165","Type":"ContainerDied","Data":"b145c8ce7f0e458af9ec669e36dea683b91b26c5d42006c1617cdf941b6394f6"} Oct 13 06:41:07 crc kubenswrapper[4833]: I1013 06:41:07.502754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkc8q" event={"ID":"d77d6f2f-70ef-4729-aa52-39c91463d165","Type":"ContainerStarted","Data":"dc28b220378fca75e8040c1fe3f178577400ede9da67d701e3636e2de1b38f74"} Oct 13 06:41:08 crc kubenswrapper[4833]: I1013 06:41:08.511903 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkc8q" event={"ID":"d77d6f2f-70ef-4729-aa52-39c91463d165","Type":"ContainerStarted","Data":"396ee37104e7feb5e6048285beb283f81aa288ddde009943723be04dfae3d701"} Oct 13 06:41:09 crc kubenswrapper[4833]: I1013 06:41:09.519648 4833 generic.go:334] "Generic (PLEG): container finished" podID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerID="396ee37104e7feb5e6048285beb283f81aa288ddde009943723be04dfae3d701" exitCode=0 Oct 13 06:41:09 crc kubenswrapper[4833]: I1013 06:41:09.519692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkc8q" event={"ID":"d77d6f2f-70ef-4729-aa52-39c91463d165","Type":"ContainerDied","Data":"396ee37104e7feb5e6048285beb283f81aa288ddde009943723be04dfae3d701"} Oct 13 06:41:10 crc kubenswrapper[4833]: I1013 06:41:10.526453 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkc8q" event={"ID":"d77d6f2f-70ef-4729-aa52-39c91463d165","Type":"ContainerStarted","Data":"8da5a565de877b4ba5987a13798378a5ba7aa27873d242af0c64f6e933fa5d35"} Oct 13 06:41:10 crc kubenswrapper[4833]: I1013 06:41:10.544344 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qkc8q" podStartSLOduration=2.98206821 podStartE2EDuration="5.544324476s" podCreationTimestamp="2025-10-13 06:41:05 +0000 UTC" firstStartedPulling="2025-10-13 06:41:07.504824952 +0000 UTC m=+757.605247878" lastFinishedPulling="2025-10-13 06:41:10.067081228 +0000 UTC m=+760.167504144" observedRunningTime="2025-10-13 06:41:10.540634309 +0000 UTC m=+760.641057235" watchObservedRunningTime="2025-10-13 06:41:10.544324476 +0000 UTC m=+760.644747392" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.335457 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n"] Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.336591 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.338237 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.338562 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.338592 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.338660 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-25jp6" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.338938 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.353691 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n"] Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.425582 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa05b8f-37f5-468d-a716-752f3402d091-webhook-cert\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.425652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbp95\" (UniqueName: \"kubernetes.io/projected/5fa05b8f-37f5-468d-a716-752f3402d091-kube-api-access-wbp95\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.425697 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa05b8f-37f5-468d-a716-752f3402d091-apiservice-cert\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.526660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa05b8f-37f5-468d-a716-752f3402d091-webhook-cert\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.526732 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbp95\" (UniqueName: \"kubernetes.io/projected/5fa05b8f-37f5-468d-a716-752f3402d091-kube-api-access-wbp95\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.526768 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa05b8f-37f5-468d-a716-752f3402d091-apiservice-cert\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.532164 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa05b8f-37f5-468d-a716-752f3402d091-apiservice-cert\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.542028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa05b8f-37f5-468d-a716-752f3402d091-webhook-cert\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.545894 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbp95\" (UniqueName: \"kubernetes.io/projected/5fa05b8f-37f5-468d-a716-752f3402d091-kube-api-access-wbp95\") pod \"metallb-operator-controller-manager-57cb68956b-fwz7n\" (UID: \"5fa05b8f-37f5-468d-a716-752f3402d091\") " pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.547116 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qr6tn"] Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.549594 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.561206 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qr6tn"] Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.606880 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w"] Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.607717 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.609343 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.609506 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4wtzj" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.610100 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.628233 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w"] Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.628241 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-catalog-content\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.628386 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3563d2c-a34e-4301-a437-b963e22b0c33-apiservice-cert\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.628417 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-utilities\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.628442 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3563d2c-a34e-4301-a437-b963e22b0c33-webhook-cert\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.628486 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vrg\" (UniqueName: \"kubernetes.io/projected/d3563d2c-a34e-4301-a437-b963e22b0c33-kube-api-access-x9vrg\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.628530 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx2vl\" (UniqueName: \"kubernetes.io/projected/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-kube-api-access-tx2vl\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.651338 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.730494 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-utilities\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.730782 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3563d2c-a34e-4301-a437-b963e22b0c33-webhook-cert\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.730827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9vrg\" (UniqueName: \"kubernetes.io/projected/d3563d2c-a34e-4301-a437-b963e22b0c33-kube-api-access-x9vrg\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.730873 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx2vl\" (UniqueName: \"kubernetes.io/projected/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-kube-api-access-tx2vl\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.730927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-catalog-content\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.730965 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3563d2c-a34e-4301-a437-b963e22b0c33-apiservice-cert\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.736342 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-utilities\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.737772 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3563d2c-a34e-4301-a437-b963e22b0c33-apiservice-cert\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.738398 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-catalog-content\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.744120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3563d2c-a34e-4301-a437-b963e22b0c33-webhook-cert\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.759925 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9vrg\" (UniqueName: \"kubernetes.io/projected/d3563d2c-a34e-4301-a437-b963e22b0c33-kube-api-access-x9vrg\") pod \"metallb-operator-webhook-server-6645b6586b-dks2w\" (UID: \"d3563d2c-a34e-4301-a437-b963e22b0c33\") " pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.760452 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx2vl\" (UniqueName: \"kubernetes.io/projected/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-kube-api-access-tx2vl\") pod \"redhat-marketplace-qr6tn\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.893519 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:15 crc kubenswrapper[4833]: I1013 06:41:15.920840 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.128756 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n"] Oct 13 06:41:16 crc kubenswrapper[4833]: W1013 06:41:16.176444 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa05b8f_37f5_468d_a716_752f3402d091.slice/crio-97be99989c6f00dfc80095d232e6b3d9b8757219bad9f2486e21acd2f8a669b2 WatchSource:0}: Error finding container 97be99989c6f00dfc80095d232e6b3d9b8757219bad9f2486e21acd2f8a669b2: Status 404 returned error can't find the container with id 97be99989c6f00dfc80095d232e6b3d9b8757219bad9f2486e21acd2f8a669b2 Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.261735 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.262147 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.327342 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.439509 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w"] Oct 13 06:41:16 crc kubenswrapper[4833]: W1013 06:41:16.441769 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3563d2c_a34e_4301_a437_b963e22b0c33.slice/crio-e4590c96d7887d92468ccfbd92f0b47de3133a3c0c701554b30817c689bd2844 WatchSource:0}: Error finding container e4590c96d7887d92468ccfbd92f0b47de3133a3c0c701554b30817c689bd2844: Status 404 returned error can't find the container with id e4590c96d7887d92468ccfbd92f0b47de3133a3c0c701554b30817c689bd2844 Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.508944 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qr6tn"] Oct 13 06:41:16 crc kubenswrapper[4833]: W1013 06:41:16.512568 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d738483_0bfc_440f_8ac9_37eb0ddc0da7.slice/crio-a82e2c11e60ed964630c50ef4b584109d727164f77a0ebe19588572ef906e645 WatchSource:0}: Error finding container a82e2c11e60ed964630c50ef4b584109d727164f77a0ebe19588572ef906e645: Status 404 returned error can't find the container with id a82e2c11e60ed964630c50ef4b584109d727164f77a0ebe19588572ef906e645 Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.562229 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" event={"ID":"5fa05b8f-37f5-468d-a716-752f3402d091","Type":"ContainerStarted","Data":"97be99989c6f00dfc80095d232e6b3d9b8757219bad9f2486e21acd2f8a669b2"} Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.563394 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" event={"ID":"d3563d2c-a34e-4301-a437-b963e22b0c33","Type":"ContainerStarted","Data":"e4590c96d7887d92468ccfbd92f0b47de3133a3c0c701554b30817c689bd2844"} Oct 13 06:41:16 crc kubenswrapper[4833]: 
I1013 06:41:16.564345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qr6tn" event={"ID":"3d738483-0bfc-440f-8ac9-37eb0ddc0da7","Type":"ContainerStarted","Data":"a82e2c11e60ed964630c50ef4b584109d727164f77a0ebe19588572ef906e645"} Oct 13 06:41:16 crc kubenswrapper[4833]: I1013 06:41:16.601442 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:17 crc kubenswrapper[4833]: I1013 06:41:17.578032 4833 generic.go:334] "Generic (PLEG): container finished" podID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerID="84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e" exitCode=0 Oct 13 06:41:17 crc kubenswrapper[4833]: I1013 06:41:17.578114 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qr6tn" event={"ID":"3d738483-0bfc-440f-8ac9-37eb0ddc0da7","Type":"ContainerDied","Data":"84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e"} Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.134571 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkc8q"] Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.135045 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qkc8q" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerName="registry-server" containerID="cri-o://8da5a565de877b4ba5987a13798378a5ba7aa27873d242af0c64f6e933fa5d35" gracePeriod=2 Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.621145 4833 generic.go:334] "Generic (PLEG): container finished" podID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerID="8da5a565de877b4ba5987a13798378a5ba7aa27873d242af0c64f6e933fa5d35" exitCode=0 Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.621223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkc8q" event={"ID":"d77d6f2f-70ef-4729-aa52-39c91463d165","Type":"ContainerDied","Data":"8da5a565de877b4ba5987a13798378a5ba7aa27873d242af0c64f6e933fa5d35"} Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.623772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" event={"ID":"5fa05b8f-37f5-468d-a716-752f3402d091","Type":"ContainerStarted","Data":"e479ebd9f1bb972be40b2f61c9fce90c0ddc81f826e6d059504023817863ea13"} Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.624349 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.625872 4833 generic.go:334] "Generic (PLEG): container finished" podID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerID="3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0" exitCode=0 Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.625934 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qr6tn" event={"ID":"3d738483-0bfc-440f-8ac9-37eb0ddc0da7","Type":"ContainerDied","Data":"3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0"} Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.653418 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n" podStartSLOduration=1.5119817549999999 
podStartE2EDuration="5.653396427s" podCreationTimestamp="2025-10-13 06:41:15 +0000 UTC" firstStartedPulling="2025-10-13 06:41:16.180078813 +0000 UTC m=+766.280501729" lastFinishedPulling="2025-10-13 06:41:20.321493485 +0000 UTC m=+770.421916401" observedRunningTime="2025-10-13 06:41:20.645408245 +0000 UTC m=+770.745831181" watchObservedRunningTime="2025-10-13 06:41:20.653396427 +0000 UTC m=+770.753819343" Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.741444 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.819048 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l79g7\" (UniqueName: \"kubernetes.io/projected/d77d6f2f-70ef-4729-aa52-39c91463d165-kube-api-access-l79g7\") pod \"d77d6f2f-70ef-4729-aa52-39c91463d165\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.819088 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-utilities\") pod \"d77d6f2f-70ef-4729-aa52-39c91463d165\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.819107 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-catalog-content\") pod \"d77d6f2f-70ef-4729-aa52-39c91463d165\" (UID: \"d77d6f2f-70ef-4729-aa52-39c91463d165\") " Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.819833 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-utilities" (OuterVolumeSpecName: "utilities") pod "d77d6f2f-70ef-4729-aa52-39c91463d165" (UID: "d77d6f2f-70ef-4729-aa52-39c91463d165"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.823860 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77d6f2f-70ef-4729-aa52-39c91463d165-kube-api-access-l79g7" (OuterVolumeSpecName: "kube-api-access-l79g7") pod "d77d6f2f-70ef-4729-aa52-39c91463d165" (UID: "d77d6f2f-70ef-4729-aa52-39c91463d165"). InnerVolumeSpecName "kube-api-access-l79g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.899184 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d77d6f2f-70ef-4729-aa52-39c91463d165" (UID: "d77d6f2f-70ef-4729-aa52-39c91463d165"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.920936 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l79g7\" (UniqueName: \"kubernetes.io/projected/d77d6f2f-70ef-4729-aa52-39c91463d165-kube-api-access-l79g7\") on node \"crc\" DevicePath \"\"" Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.920998 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:41:20 crc kubenswrapper[4833]: I1013 06:41:20.921018 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d77d6f2f-70ef-4729-aa52-39c91463d165-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.634307 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qr6tn" event={"ID":"3d738483-0bfc-440f-8ac9-37eb0ddc0da7","Type":"ContainerStarted","Data":"8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8"} Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.636863 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkc8q" Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.636874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkc8q" event={"ID":"d77d6f2f-70ef-4729-aa52-39c91463d165","Type":"ContainerDied","Data":"dc28b220378fca75e8040c1fe3f178577400ede9da67d701e3636e2de1b38f74"} Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.636971 4833 scope.go:117] "RemoveContainer" containerID="8da5a565de877b4ba5987a13798378a5ba7aa27873d242af0c64f6e933fa5d35" Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.641107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" event={"ID":"d3563d2c-a34e-4301-a437-b963e22b0c33","Type":"ContainerStarted","Data":"e567c2c925e29bf83000ec359fe40d9e53af093a415562519c9ec0df13301076"} Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.641236 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.654523 4833 scope.go:117] "RemoveContainer" containerID="396ee37104e7feb5e6048285beb283f81aa288ddde009943723be04dfae3d701" Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.657134 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qr6tn" podStartSLOduration=3.087097423 podStartE2EDuration="6.657112463s" podCreationTimestamp="2025-10-13 06:41:15 +0000 UTC" firstStartedPulling="2025-10-13 06:41:17.581056568 +0000 UTC m=+767.681479484" lastFinishedPulling="2025-10-13 06:41:21.151071608 +0000 UTC m=+771.251494524" observedRunningTime="2025-10-13 06:41:21.654893419 +0000 UTC m=+771.755316345" watchObservedRunningTime="2025-10-13 06:41:21.657112463 +0000 UTC m=+771.757535379" Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.682964 4833 scope.go:117] "RemoveContainer" containerID="b145c8ce7f0e458af9ec669e36dea683b91b26c5d42006c1617cdf941b6394f6" Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.683071 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-qkc8q"] Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.686897 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qkc8q"] Oct 13 06:41:21 crc kubenswrapper[4833]: I1013 06:41:21.713444 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w" podStartSLOduration=2.755814869 podStartE2EDuration="6.713428787s" podCreationTimestamp="2025-10-13 06:41:15 +0000 UTC" firstStartedPulling="2025-10-13 06:41:16.448020259 +0000 UTC m=+766.548443175" lastFinishedPulling="2025-10-13 06:41:20.405634177 +0000 UTC m=+770.506057093" observedRunningTime="2025-10-13 06:41:21.711924664 +0000 UTC m=+771.812347570" watchObservedRunningTime="2025-10-13 06:41:21.713428787 +0000 UTC m=+771.813851703" Oct 13 06:41:22 crc kubenswrapper[4833]: I1013 06:41:22.634449 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" path="/var/lib/kubelet/pods/d77d6f2f-70ef-4729-aa52-39c91463d165/volumes" Oct 13 06:41:25 crc kubenswrapper[4833]: I1013 06:41:25.893690 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:25 crc kubenswrapper[4833]: I1013 06:41:25.893984 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:25 crc kubenswrapper[4833]: I1013 06:41:25.947628 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:26 crc kubenswrapper[4833]: I1013 06:41:26.716155 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:27 crc kubenswrapper[4833]: I1013 06:41:27.535868 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qr6tn"] Oct 13 06:41:28 crc kubenswrapper[4833]: I1013 06:41:28.677575 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qr6tn" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerName="registry-server" containerID="cri-o://8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8" gracePeriod=2 Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.232161 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.328135 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx2vl\" (UniqueName: \"kubernetes.io/projected/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-kube-api-access-tx2vl\") pod \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.328517 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-catalog-content\") pod \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.328580 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-utilities\") pod \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\" (UID: \"3d738483-0bfc-440f-8ac9-37eb0ddc0da7\") " Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.329810 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-utilities" (OuterVolumeSpecName: "utilities") pod "3d738483-0bfc-440f-8ac9-37eb0ddc0da7" (UID: "3d738483-0bfc-440f-8ac9-37eb0ddc0da7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.335315 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-kube-api-access-tx2vl" (OuterVolumeSpecName: "kube-api-access-tx2vl") pod "3d738483-0bfc-440f-8ac9-37eb0ddc0da7" (UID: "3d738483-0bfc-440f-8ac9-37eb0ddc0da7"). InnerVolumeSpecName "kube-api-access-tx2vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.345330 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d738483-0bfc-440f-8ac9-37eb0ddc0da7" (UID: "3d738483-0bfc-440f-8ac9-37eb0ddc0da7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.430574 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx2vl\" (UniqueName: \"kubernetes.io/projected/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-kube-api-access-tx2vl\") on node \"crc\" DevicePath \"\"" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.430613 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.430625 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d738483-0bfc-440f-8ac9-37eb0ddc0da7-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.684683 4833 generic.go:334] "Generic (PLEG): container finished" podID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerID="8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8" exitCode=0 Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.684907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qr6tn" event={"ID":"3d738483-0bfc-440f-8ac9-37eb0ddc0da7","Type":"ContainerDied","Data":"8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8"} Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.686193 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qr6tn" event={"ID":"3d738483-0bfc-440f-8ac9-37eb0ddc0da7","Type":"ContainerDied","Data":"a82e2c11e60ed964630c50ef4b584109d727164f77a0ebe19588572ef906e645"} Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.685022 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qr6tn" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.686333 4833 scope.go:117] "RemoveContainer" containerID="8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.702188 4833 scope.go:117] "RemoveContainer" containerID="3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.719891 4833 scope.go:117] "RemoveContainer" containerID="84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.744554 4833 scope.go:117] "RemoveContainer" containerID="8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8" Oct 13 06:41:29 crc kubenswrapper[4833]: E1013 06:41:29.745018 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8\": container with ID starting with 8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8 not found: ID does not exist" containerID="8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.745055 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8"} err="failed to get container status \"8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8\": rpc error: code = NotFound desc = could not find container \"8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8\": container with ID starting with 8de61387e03e5fd7921f4bb82aaa3553129160883f0d2c1878151a98674fb3d8 not found: ID does not exist" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.745080 4833 scope.go:117] "RemoveContainer" containerID="3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0" Oct 13 06:41:29 crc kubenswrapper[4833]: E1013 06:41:29.745419 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0\": container with ID starting with 3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0 not found: ID does not exist" containerID="3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.745446 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0"} err="failed to get container status \"3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0\": rpc error: code = NotFound desc = could not find container \"3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0\": container with ID starting with 3d2dfd8b228dcaf2d4104e0b692e6b52960389416a80f0d043727322b92608d0 not found: ID does not exist" Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.745465 4833 scope.go:117] "RemoveContainer" containerID="84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e" Oct 13 06:41:29 crc kubenswrapper[4833]: E1013 06:41:29.745720 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e\": container with ID starting 
with 84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e not found: ID does not exist" containerID="84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e"
Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.745746 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e"} err="failed to get container status \"84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e\": rpc error: code = NotFound desc = could not find container \"84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e\": container with ID starting with 84e0a35bd174ce5198756b7e3859f6e29ca653de267d0e8d8765bc3d1e6ee30e not found: ID does not exist"
Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.748185 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qr6tn"]
Oct 13 06:41:29 crc kubenswrapper[4833]: I1013 06:41:29.755277 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qr6tn"]
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.542170 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.542233 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.542280 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss"
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.542918 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c7662799f74f815e9b59b491ebe056d2f40e7a81a10b5cc35be025975e84206"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.542990 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://0c7662799f74f815e9b59b491ebe056d2f40e7a81a10b5cc35be025975e84206" gracePeriod=600
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.653518 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" path="/var/lib/kubelet/pods/3d738483-0bfc-440f-8ac9-37eb0ddc0da7/volumes"
Oct 13 06:41:30 crc kubenswrapper[4833]: E1013 06:41:30.656659 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa5b6ea2_f89e_4768_8663_bd965bde64fa.slice/crio-0c7662799f74f815e9b59b491ebe056d2f40e7a81a10b5cc35be025975e84206.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa5b6ea2_f89e_4768_8663_bd965bde64fa.slice/crio-conmon-0c7662799f74f815e9b59b491ebe056d2f40e7a81a10b5cc35be025975e84206.scope\": RecentStats: unable to find data in memory cache]"
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.694371 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="0c7662799f74f815e9b59b491ebe056d2f40e7a81a10b5cc35be025975e84206" exitCode=0
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.694448 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"0c7662799f74f815e9b59b491ebe056d2f40e7a81a10b5cc35be025975e84206"}
Oct 13 06:41:30 crc kubenswrapper[4833]: I1013 06:41:30.694722 4833 scope.go:117] "RemoveContainer" containerID="3d72eefcdc034d5e45d5c524e5c6641a6db280762987026a18bc3b264c963a34"
Oct 13 06:41:31 crc kubenswrapper[4833]: I1013 06:41:31.706758 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"dd9e737b5446edf3a1bc45401b54500d618ca084763a39fb5a0be12fcd006b99"}
Oct 13 06:41:35 crc kubenswrapper[4833]: I1013 06:41:35.925171 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6645b6586b-dks2w"
Oct 13 06:41:55 crc kubenswrapper[4833]: I1013 06:41:55.654118 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-57cb68956b-fwz7n"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306329 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4nf9x"]
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.306608 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerName="registry-server"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306624 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerName="registry-server"
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.306640 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerName="registry-server"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306648 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerName="registry-server"
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.306661 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerName="extract-utilities"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306669 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerName="extract-utilities"
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.306684 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerName="extract-content"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306691 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerName="extract-content"
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.306699 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerName="extract-utilities"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306706 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerName="extract-utilities"
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.306721 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerName="extract-content"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306731 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerName="extract-content"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306857 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77d6f2f-70ef-4729-aa52-39c91463d165" containerName="registry-server"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.306873 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d738483-0bfc-440f-8ac9-37eb0ddc0da7" containerName="registry-server"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.308637 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.311268 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.311290 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mtwkd"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.311330 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.315172 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"]
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.316035 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.318222 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.329698 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"]
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.381837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-sockets\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.381887 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-startup\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.381906 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-conf\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.381957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-cert\") pod \"frr-k8s-webhook-server-64bf5d555-n574p\" (UID: \"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.381972 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-metrics-certs\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.381990 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-reloader\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.382158 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-metrics\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.382232 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rpj7\" (UniqueName: \"kubernetes.io/projected/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-kube-api-access-2rpj7\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.382362 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbkq8\" (UniqueName: \"kubernetes.io/projected/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-kube-api-access-jbkq8\") pod \"frr-k8s-webhook-server-64bf5d555-n574p\" (UID: \"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.390324 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-w6zhs"]
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.391406 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.393181 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.393218 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.393758 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.394243 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xbxhx"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.414984 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-wv7ff"]
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.416493 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.422511 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.433962 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-wv7ff"]
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483277 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbkq8\" (UniqueName: \"kubernetes.io/projected/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-kube-api-access-jbkq8\") pod \"frr-k8s-webhook-server-64bf5d555-n574p\" (UID: \"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4jm\" (UniqueName: \"kubernetes.io/projected/1a12e7a4-c597-4876-b004-8a12717d688e-kube-api-access-kx4jm\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483347 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-sockets\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483371 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-startup\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483391 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-conf\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483413 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a12e7a4-c597-4876-b004-8a12717d688e-metallb-excludel2\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483436 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-cert\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483453 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-metrics-certs\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-cert\") pod \"frr-k8s-webhook-server-64bf5d555-n574p\" (UID: \"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483499 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-metrics-certs\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483518 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-reloader\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483549 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47c2\" (UniqueName: \"kubernetes.io/projected/c6237d22-0249-4927-9ec7-d7b86bb6e80e-kube-api-access-c47c2\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483570 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-metrics\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-metrics-certs\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483605 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rpj7\" (UniqueName: \"kubernetes.io/projected/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-kube-api-access-2rpj7\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.483620 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.483698 4833 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.483761 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-cert podName:5e16790f-a99f-4c1c-ac8e-b350e0e9efc9 nodeName:}" failed. No retries permitted until 2025-10-13 06:41:56.983740993 +0000 UTC m=+807.084163919 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-cert") pod "frr-k8s-webhook-server-64bf5d555-n574p" (UID: "5e16790f-a99f-4c1c-ac8e-b350e0e9efc9") : secret "frr-k8s-webhook-server-cert" not found
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.484190 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-sockets\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.484263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-reloader\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.484322 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-metrics\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.484435 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-conf\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.484785 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-frr-startup\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.499735 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-metrics-certs\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.503486 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rpj7\" (UniqueName: \"kubernetes.io/projected/283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5-kube-api-access-2rpj7\") pod \"frr-k8s-4nf9x\" (UID: \"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5\") " pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.512033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbkq8\" (UniqueName: \"kubernetes.io/projected/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-kube-api-access-jbkq8\") pod \"frr-k8s-webhook-server-64bf5d555-n574p\" (UID: \"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.584354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.584426 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4jm\" (UniqueName: \"kubernetes.io/projected/1a12e7a4-c597-4876-b004-8a12717d688e-kube-api-access-kx4jm\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.584482 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a12e7a4-c597-4876-b004-8a12717d688e-metallb-excludel2\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.584515 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-cert\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.584535 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-metrics-certs\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.584600 4833 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.584679 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist podName:1a12e7a4-c597-4876-b004-8a12717d688e nodeName:}" failed. No retries permitted until 2025-10-13 06:41:57.084659032 +0000 UTC m=+807.185081948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist") pod "speaker-w6zhs" (UID: "1a12e7a4-c597-4876-b004-8a12717d688e") : secret "metallb-memberlist" not found
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.584611 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c47c2\" (UniqueName: \"kubernetes.io/projected/c6237d22-0249-4927-9ec7-d7b86bb6e80e-kube-api-access-c47c2\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.584860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-metrics-certs\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.584870 4833 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.584923 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-metrics-certs podName:1a12e7a4-c597-4876-b004-8a12717d688e nodeName:}" failed. No retries permitted until 2025-10-13 06:41:57.084904759 +0000 UTC m=+807.185327785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-metrics-certs") pod "speaker-w6zhs" (UID: "1a12e7a4-c597-4876-b004-8a12717d688e") : secret "speaker-certs-secret" not found
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.584998 4833 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Oct 13 06:41:56 crc kubenswrapper[4833]: E1013 06:41:56.585027 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-metrics-certs podName:c6237d22-0249-4927-9ec7-d7b86bb6e80e nodeName:}" failed. No retries permitted until 2025-10-13 06:41:57.085020502 +0000 UTC m=+807.185443418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-metrics-certs") pod "controller-68d546b9d8-wv7ff" (UID: "c6237d22-0249-4927-9ec7-d7b86bb6e80e") : secret "controller-certs-secret" not found
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.585608 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a12e7a4-c597-4876-b004-8a12717d688e-metallb-excludel2\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.586370 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.598362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-cert\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.602291 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4jm\" (UniqueName: \"kubernetes.io/projected/1a12e7a4-c597-4876-b004-8a12717d688e-kube-api-access-kx4jm\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.605768 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c47c2\" (UniqueName: \"kubernetes.io/projected/c6237d22-0249-4927-9ec7-d7b86bb6e80e-kube-api-access-c47c2\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.626451 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.857275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerStarted","Data":"fa9d60e50b1b76cce4e3cd79446329f7d3235f623ba538916d74e984b54c7c03"}
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.990391 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-cert\") pod \"frr-k8s-webhook-server-64bf5d555-n574p\" (UID: \"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:56 crc kubenswrapper[4833]: I1013 06:41:56.993916 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e16790f-a99f-4c1c-ac8e-b350e0e9efc9-cert\") pod \"frr-k8s-webhook-server-64bf5d555-n574p\" (UID: \"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.091975 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-metrics-certs\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.092049 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-metrics-certs\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.092074 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:57 crc kubenswrapper[4833]: E1013 06:41:57.092187 4833 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 13 06:41:57 crc kubenswrapper[4833]: E1013 06:41:57.092246 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist podName:1a12e7a4-c597-4876-b004-8a12717d688e nodeName:}" failed. No retries permitted until 2025-10-13 06:41:58.092229282 +0000 UTC m=+808.192652198 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist") pod "speaker-w6zhs" (UID: "1a12e7a4-c597-4876-b004-8a12717d688e") : secret "metallb-memberlist" not found
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.095167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6237d22-0249-4927-9ec7-d7b86bb6e80e-metrics-certs\") pod \"controller-68d546b9d8-wv7ff\" (UID: \"c6237d22-0249-4927-9ec7-d7b86bb6e80e\") " pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.096968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-metrics-certs\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.236260 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.328987 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.636994 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"]
Oct 13 06:41:57 crc kubenswrapper[4833]: W1013 06:41:57.642934 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e16790f_a99f_4c1c_ac8e_b350e0e9efc9.slice/crio-7c76fa196c0b63046cfc34cc6a57b6940d427e4ded5279e548731dad25281344 WatchSource:0}: Error finding container 7c76fa196c0b63046cfc34cc6a57b6940d427e4ded5279e548731dad25281344: Status 404 returned error can't find the container with id 7c76fa196c0b63046cfc34cc6a57b6940d427e4ded5279e548731dad25281344
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.782101 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-wv7ff"]
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.863772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-wv7ff" event={"ID":"c6237d22-0249-4927-9ec7-d7b86bb6e80e","Type":"ContainerStarted","Data":"8f12139224a907714a359ab696feadb699bd869945545aba2bcf58eeaad76547"}
Oct 13 06:41:57 crc kubenswrapper[4833]: I1013 06:41:57.865405 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p" event={"ID":"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9","Type":"ContainerStarted","Data":"7c76fa196c0b63046cfc34cc6a57b6940d427e4ded5279e548731dad25281344"}
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.105030 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.110176 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a12e7a4-c597-4876-b004-8a12717d688e-memberlist\") pod \"speaker-w6zhs\" (UID: \"1a12e7a4-c597-4876-b004-8a12717d688e\") " pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.204635 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:58 crc kubenswrapper[4833]: W1013 06:41:58.224530 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a12e7a4_c597_4876_b004_8a12717d688e.slice/crio-3ec0a2f0bf9a43fdc43e185f4bc4da364839a1c09aa33996e31cbb9834fd531e WatchSource:0}: Error finding container 3ec0a2f0bf9a43fdc43e185f4bc4da364839a1c09aa33996e31cbb9834fd531e: Status 404 returned error can't find the container with id 3ec0a2f0bf9a43fdc43e185f4bc4da364839a1c09aa33996e31cbb9834fd531e
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.870784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-wv7ff" event={"ID":"c6237d22-0249-4927-9ec7-d7b86bb6e80e","Type":"ContainerStarted","Data":"64506a0873cb947af352087294d222253c9bc03ffab4e931105c4fbf67a5f064"}
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.870832 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-wv7ff" event={"ID":"c6237d22-0249-4927-9ec7-d7b86bb6e80e","Type":"ContainerStarted","Data":"f5aed4b96e7570faf260a0d8011c26d2a3bcc76693bb2b04309ed2e51fb21833"}
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.871763 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.873004 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w6zhs" event={"ID":"1a12e7a4-c597-4876-b004-8a12717d688e","Type":"ContainerStarted","Data":"ef49ea055092e6c2c5fa1edaca067076813ec7e0e2af14bce0b1a49cfa298897"}
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.873026 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w6zhs" event={"ID":"1a12e7a4-c597-4876-b004-8a12717d688e","Type":"ContainerStarted","Data":"74ae34df793ad5f3ecd0f4ff8caec77dbeb1aefa48f1ba3113e6e9811a599dad"}
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.873038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w6zhs" event={"ID":"1a12e7a4-c597-4876-b004-8a12717d688e","Type":"ContainerStarted","Data":"3ec0a2f0bf9a43fdc43e185f4bc4da364839a1c09aa33996e31cbb9834fd531e"}
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.873434 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-w6zhs"
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.915259 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-wv7ff" podStartSLOduration=2.915239631 podStartE2EDuration="2.915239631s" podCreationTimestamp="2025-10-13 06:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:41:58.897362592 +0000 UTC m=+808.997785508" watchObservedRunningTime="2025-10-13 06:41:58.915239631 +0000 UTC m=+809.015662547"
Oct 13 06:41:58 crc kubenswrapper[4833]: I1013 06:41:58.920084 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-w6zhs" podStartSLOduration=2.920071111 podStartE2EDuration="2.920071111s" podCreationTimestamp="2025-10-13 06:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:41:58.91623236 +0000 UTC m=+809.016655306" watchObservedRunningTime="2025-10-13 06:41:58.920071111 +0000 UTC m=+809.020494027"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.252201 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vf84p"]
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.256604 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.271783 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vf84p"]
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.287663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zt6\" (UniqueName: \"kubernetes.io/projected/766ba0cc-c795-40d3-b06b-2aaee8e09747-kube-api-access-j9zt6\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.287721 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-utilities\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.287783 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-catalog-content\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.388965 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-catalog-content\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.389079 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zt6\" (UniqueName: \"kubernetes.io/projected/766ba0cc-c795-40d3-b06b-2aaee8e09747-kube-api-access-j9zt6\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.389110 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-utilities\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.389568 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-utilities\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.390847 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-catalog-content\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.408324 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9zt6\" (UniqueName: \"kubernetes.io/projected/766ba0cc-c795-40d3-b06b-2aaee8e09747-kube-api-access-j9zt6\") pod \"community-operators-vf84p\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") " pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:03 crc kubenswrapper[4833]: I1013 06:42:03.582433 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:04 crc kubenswrapper[4833]: W1013 06:42:04.319595 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod766ba0cc_c795_40d3_b06b_2aaee8e09747.slice/crio-5e050306acadd30e6a0759c53df52ccdde08eaf37efa40364f6a978b8a655f22 WatchSource:0}: Error finding container 5e050306acadd30e6a0759c53df52ccdde08eaf37efa40364f6a978b8a655f22: Status 404 returned error can't find the container with id 5e050306acadd30e6a0759c53df52ccdde08eaf37efa40364f6a978b8a655f22
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.320097 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vf84p"]
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.918900 4833 generic.go:334] "Generic (PLEG): container finished" podID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerID="078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04" exitCode=0
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.918995 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf84p" event={"ID":"766ba0cc-c795-40d3-b06b-2aaee8e09747","Type":"ContainerDied","Data":"078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04"}
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.919202 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf84p" event={"ID":"766ba0cc-c795-40d3-b06b-2aaee8e09747","Type":"ContainerStarted","Data":"5e050306acadd30e6a0759c53df52ccdde08eaf37efa40364f6a978b8a655f22"}
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.920399 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p" event={"ID":"5e16790f-a99f-4c1c-ac8e-b350e0e9efc9","Type":"ContainerStarted","Data":"2b3b8eedfc6a98ceb15a6a6ca8d9f6880287022aa43c15b09120877953546f69"}
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.920611 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p"
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.921800 4833 generic.go:334] "Generic (PLEG): container finished" podID="283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5" containerID="179fd51c87320ad0104ac62695863ec81ba6c58bcf8327bc68c6c22f11e09128" exitCode=0
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.921842 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerDied","Data":"179fd51c87320ad0104ac62695863ec81ba6c58bcf8327bc68c6c22f11e09128"}
Oct 13 06:42:04 crc kubenswrapper[4833]: I1013 06:42:04.981827 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p" podStartSLOduration=2.65886194 podStartE2EDuration="8.981801308s" podCreationTimestamp="2025-10-13 06:41:56 +0000 UTC" firstStartedPulling="2025-10-13 06:41:57.644703966 +0000 UTC m=+807.745126882" lastFinishedPulling="2025-10-13 06:42:03.967643334 +0000 UTC m=+814.068066250" observedRunningTime="2025-10-13 06:42:04.974900987 +0000 UTC m=+815.075323913" watchObservedRunningTime="2025-10-13 06:42:04.981801308 +0000 UTC m=+815.082224224"
Oct 13 06:42:05 crc kubenswrapper[4833]: I1013 06:42:05.929444 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf84p" event={"ID":"766ba0cc-c795-40d3-b06b-2aaee8e09747","Type":"ContainerStarted","Data":"a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f"}
Oct 13 06:42:05 crc kubenswrapper[4833]: I1013 06:42:05.930886 4833 generic.go:334] "Generic (PLEG): container finished" podID="283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5" containerID="77dfe68deec3c596ede78c7f7f6291ec64021f911a5af09c2c7a1a8a5cfe2b25" exitCode=0
Oct 13 06:42:05 crc kubenswrapper[4833]: I1013 06:42:05.930916 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerDied","Data":"77dfe68deec3c596ede78c7f7f6291ec64021f911a5af09c2c7a1a8a5cfe2b25"}
Oct 13 06:42:06 crc kubenswrapper[4833]: I1013 06:42:06.937918 4833 generic.go:334] "Generic (PLEG): container finished" podID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerID="a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f" exitCode=0
Oct 13 06:42:06 crc kubenswrapper[4833]: I1013 06:42:06.938000 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf84p" event={"ID":"766ba0cc-c795-40d3-b06b-2aaee8e09747","Type":"ContainerDied","Data":"a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f"}
Oct 13 06:42:06 crc kubenswrapper[4833]: I1013 06:42:06.940827 4833 generic.go:334] "Generic (PLEG): container finished" podID="283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5" containerID="92c6b0e8fc73926441f7e7dec912f01ad05be3933573c80211ab8325196170db" exitCode=0
Oct 13 06:42:06 crc kubenswrapper[4833]: I1013 06:42:06.940872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerDied","Data":"92c6b0e8fc73926441f7e7dec912f01ad05be3933573c80211ab8325196170db"}
Oct 13 06:42:07 crc kubenswrapper[4833]: I1013 06:42:07.340524 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-wv7ff"
Oct 13 06:42:07 crc kubenswrapper[4833]: I1013 06:42:07.960351 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerStarted","Data":"c56cfc3d0a99bbd8bafae1ee16b1df254dfd6e65dd50e5613a0f3a8c4382e73f"}
Oct 13 06:42:07 crc kubenswrapper[4833]: I1013 06:42:07.960402 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerStarted","Data":"efa6df4d450d436eb66f7d3b250471ffaed5bc39ee47eb0bd7557edfb11e05e4"}
Oct 13 06:42:07 crc kubenswrapper[4833]: I1013 06:42:07.960412 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerStarted","Data":"b43f09105fd354141a1dd006412d59c1bdad591356d29fd5bdf42d0c48dd39eb"}
Oct 13 06:42:07 crc kubenswrapper[4833]: I1013 06:42:07.960421 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerStarted","Data":"56572920984f477a1eef334834fc65d4816ff8374865596d1b01a392f9e3b29e"}
Oct 13 06:42:07 crc kubenswrapper[4833]: I1013 06:42:07.960431 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerStarted","Data":"637e61d87f6a8602c01fd18f02139f1531b6bbb0ea49b2250927dbab512ad25a"}
Oct 13 06:42:07 crc kubenswrapper[4833]: I1013 06:42:07.962214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf84p" event={"ID":"766ba0cc-c795-40d3-b06b-2aaee8e09747","Type":"ContainerStarted","Data":"bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b"}
Oct 13 06:42:07 crc kubenswrapper[4833]: I1013 06:42:07.991352 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vf84p" podStartSLOduration=2.498823453 podStartE2EDuration="4.991330981s" podCreationTimestamp="2025-10-13 06:42:03 +0000 UTC" firstStartedPulling="2025-10-13 06:42:04.920344464 +0000 UTC m=+815.020767380" lastFinishedPulling="2025-10-13 06:42:07.412851982 +0000 UTC m=+817.513274908" observedRunningTime="2025-10-13 06:42:07.988349125 +0000 UTC m=+818.088772051" watchObservedRunningTime="2025-10-13 06:42:07.991330981 +0000 UTC m=+818.091753897"
Oct 13 06:42:08 crc kubenswrapper[4833]: I1013 06:42:08.209009 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-w6zhs"
Oct 13 06:42:08 crc kubenswrapper[4833]: I1013 06:42:08.971346 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nf9x" event={"ID":"283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5","Type":"ContainerStarted","Data":"75d87ca869ba6fffbf7f2612f8dd21ffadd334e79f696cc7131d7239edac947b"}
Oct 13 06:42:08 crc kubenswrapper[4833]: I1013 06:42:08.971740 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.660027 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4nf9x" podStartSLOduration=6.450628317 podStartE2EDuration="13.660007351s" podCreationTimestamp="2025-10-13 06:41:56 +0000 UTC" firstStartedPulling="2025-10-13 06:41:56.726656832 +0000 UTC m=+806.827079748" lastFinishedPulling="2025-10-13 06:42:03.936035856 +0000 UTC m=+814.036458782" observedRunningTime="2025-10-13 06:42:09.002129498 +0000 UTC m=+819.102552414" watchObservedRunningTime="2025-10-13 06:42:09.660007351 +0000 UTC m=+819.760430267"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.662818 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"]
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.663927 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.665571 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.672224 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"]
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.804893 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.805152 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.805255 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnfx5\" (UniqueName: \"kubernetes.io/projected/55762019-8ec1-492a-9691-d02c118d3176-kube-api-access-jnfx5\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.906884 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.906944 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.906991 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnfx5\" (UniqueName: \"kubernetes.io/projected/55762019-8ec1-492a-9691-d02c118d3176-kube-api-access-jnfx5\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.907445 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.907459 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.925418 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnfx5\" (UniqueName: \"kubernetes.io/projected/55762019-8ec1-492a-9691-d02c118d3176-kube-api-access-jnfx5\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:09 crc kubenswrapper[4833]: I1013 06:42:09.987048 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"
Oct 13 06:42:10 crc kubenswrapper[4833]: I1013 06:42:10.368683 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs"]
Oct 13 06:42:10 crc kubenswrapper[4833]: I1013 06:42:10.991657 4833 generic.go:334] "Generic (PLEG): container finished" podID="55762019-8ec1-492a-9691-d02c118d3176" containerID="2d57006356af1f0d1811b11d9343d5fb2a43d5149f1fe315c1d835fe8801ccdd" exitCode=0
Oct 13 06:42:10 crc kubenswrapper[4833]: I1013 06:42:10.991699 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs" event={"ID":"55762019-8ec1-492a-9691-d02c118d3176","Type":"ContainerDied","Data":"2d57006356af1f0d1811b11d9343d5fb2a43d5149f1fe315c1d835fe8801ccdd"}
Oct 13 06:42:10 crc kubenswrapper[4833]: I1013 06:42:10.992055 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs" event={"ID":"55762019-8ec1-492a-9691-d02c118d3176","Type":"ContainerStarted","Data":"de19eef9770afd3e722d5985e364a1f7efb248b758ee4a6f107a80743bead199"}
Oct 13 06:42:11 crc kubenswrapper[4833]: I1013 06:42:11.627712 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:42:11 crc kubenswrapper[4833]: I1013 06:42:11.666865 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4nf9x"
Oct 13 06:42:13 crc kubenswrapper[4833]: I1013 06:42:13.583582 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:13 crc kubenswrapper[4833]: I1013 06:42:13.583812 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:13 crc kubenswrapper[4833]: I1013 06:42:13.632986 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:14 crc kubenswrapper[4833]: I1013 06:42:14.055976 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:15 crc kubenswrapper[4833]: I1013 06:42:15.021901 4833 generic.go:334] "Generic (PLEG): container finished" podID="55762019-8ec1-492a-9691-d02c118d3176" containerID="6318a31bb016bb6c48c046fed5c965a72da0b87af64725e58744356f1c49d7be" exitCode=0
Oct 13 06:42:15 crc kubenswrapper[4833]: I1013 06:42:15.022024 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs" event={"ID":"55762019-8ec1-492a-9691-d02c118d3176","Type":"ContainerDied","Data":"6318a31bb016bb6c48c046fed5c965a72da0b87af64725e58744356f1c49d7be"}
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.029794 4833 generic.go:334] "Generic (PLEG): container finished" podID="55762019-8ec1-492a-9691-d02c118d3176" containerID="b9bbdb478c810637a3bbde7f880119dc4af924cdfb8aadac9a8dfb6cb6538c40" exitCode=0
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.029835 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs" event={"ID":"55762019-8ec1-492a-9691-d02c118d3176","Type":"ContainerDied","Data":"b9bbdb478c810637a3bbde7f880119dc4af924cdfb8aadac9a8dfb6cb6538c40"}
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.223038 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vf84p"]
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.223345 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vf84p" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerName="registry-server" containerID="cri-o://bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b" gracePeriod=2
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.582498 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.704334 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9zt6\" (UniqueName: \"kubernetes.io/projected/766ba0cc-c795-40d3-b06b-2aaee8e09747-kube-api-access-j9zt6\") pod \"766ba0cc-c795-40d3-b06b-2aaee8e09747\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") "
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.704430 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-utilities\") pod \"766ba0cc-c795-40d3-b06b-2aaee8e09747\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") "
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.704495 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-catalog-content\") pod \"766ba0cc-c795-40d3-b06b-2aaee8e09747\" (UID: \"766ba0cc-c795-40d3-b06b-2aaee8e09747\") "
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.705335 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-utilities" (OuterVolumeSpecName: "utilities") pod "766ba0cc-c795-40d3-b06b-2aaee8e09747" (UID: "766ba0cc-c795-40d3-b06b-2aaee8e09747"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.714770 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766ba0cc-c795-40d3-b06b-2aaee8e09747-kube-api-access-j9zt6" (OuterVolumeSpecName: "kube-api-access-j9zt6") pod "766ba0cc-c795-40d3-b06b-2aaee8e09747" (UID: "766ba0cc-c795-40d3-b06b-2aaee8e09747"). InnerVolumeSpecName "kube-api-access-j9zt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.759463 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "766ba0cc-c795-40d3-b06b-2aaee8e09747" (UID: "766ba0cc-c795-40d3-b06b-2aaee8e09747"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.806205 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9zt6\" (UniqueName: \"kubernetes.io/projected/766ba0cc-c795-40d3-b06b-2aaee8e09747-kube-api-access-j9zt6\") on node \"crc\" DevicePath \"\""
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.806245 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 06:42:16 crc kubenswrapper[4833]: I1013 06:42:16.806255 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766ba0cc-c795-40d3-b06b-2aaee8e09747-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.037149 4833 generic.go:334] "Generic (PLEG): container finished" podID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerID="bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b" exitCode=0
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.037213 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vf84p"
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.037217 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf84p" event={"ID":"766ba0cc-c795-40d3-b06b-2aaee8e09747","Type":"ContainerDied","Data":"bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b"}
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.037288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf84p" event={"ID":"766ba0cc-c795-40d3-b06b-2aaee8e09747","Type":"ContainerDied","Data":"5e050306acadd30e6a0759c53df52ccdde08eaf37efa40364f6a978b8a655f22"}
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.037317 4833 scope.go:117] "RemoveContainer" containerID="bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b"
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.055521 4833 scope.go:117] "RemoveContainer" containerID="a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f"
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.066624 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vf84p"]
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.078024 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vf84p"]
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.093288 4833 scope.go:117] "RemoveContainer" containerID="078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04"
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.106963 4833 scope.go:117] "RemoveContainer" containerID="bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b"
Oct 13 06:42:17 crc kubenswrapper[4833]: E1013 06:42:17.108189 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b\": container with ID starting with bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b not found: ID does not exist" containerID="bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b"
Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.108716
4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b"} err="failed to get container status \"bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b\": rpc error: code = NotFound desc = could not find container \"bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b\": container with ID starting with bd106482d18b3a0e185fec24fd452ddef3c0af080afb093b94e27f5f8e4d9b1b not found: ID does not exist" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.108750 4833 scope.go:117] "RemoveContainer" containerID="a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f" Oct 13 06:42:17 crc kubenswrapper[4833]: E1013 06:42:17.109159 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f\": container with ID starting with a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f not found: ID does not exist" containerID="a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.109180 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f"} err="failed to get container status \"a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f\": rpc error: code = NotFound desc = could not find container \"a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f\": container with ID starting with a35c50c7f8f30766da80d48a9271c025b9960293db0a24303d8027efa252cf9f not found: ID does not exist" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.109194 4833 scope.go:117] "RemoveContainer" containerID="078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04" Oct 13 06:42:17 crc kubenswrapper[4833]: E1013 06:42:17.109491 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04\": container with ID starting with 078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04 not found: ID does not exist" containerID="078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.109508 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04"} err="failed to get container status \"078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04\": rpc error: code = NotFound desc = could not find container \"078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04\": container with ID starting with 078cbc375809c6afbbea75269cff652624bccdc1b0a5c6b8f3e9aeed17bc9d04 not found: ID does not exist" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.246117 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-n574p" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.262822 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.312828 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-bundle\") pod \"55762019-8ec1-492a-9691-d02c118d3176\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.312938 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-util\") pod \"55762019-8ec1-492a-9691-d02c118d3176\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.312986 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnfx5\" (UniqueName: \"kubernetes.io/projected/55762019-8ec1-492a-9691-d02c118d3176-kube-api-access-jnfx5\") pod \"55762019-8ec1-492a-9691-d02c118d3176\" (UID: \"55762019-8ec1-492a-9691-d02c118d3176\") " Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.314109 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-bundle" (OuterVolumeSpecName: "bundle") pod "55762019-8ec1-492a-9691-d02c118d3176" (UID: "55762019-8ec1-492a-9691-d02c118d3176"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.316313 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55762019-8ec1-492a-9691-d02c118d3176-kube-api-access-jnfx5" (OuterVolumeSpecName: "kube-api-access-jnfx5") pod "55762019-8ec1-492a-9691-d02c118d3176" (UID: "55762019-8ec1-492a-9691-d02c118d3176"). InnerVolumeSpecName "kube-api-access-jnfx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.322894 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-util" (OuterVolumeSpecName: "util") pod "55762019-8ec1-492a-9691-d02c118d3176" (UID: "55762019-8ec1-492a-9691-d02c118d3176"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.415345 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.415387 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55762019-8ec1-492a-9691-d02c118d3176-util\") on node \"crc\" DevicePath \"\"" Oct 13 06:42:17 crc kubenswrapper[4833]: I1013 06:42:17.415400 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnfx5\" (UniqueName: \"kubernetes.io/projected/55762019-8ec1-492a-9691-d02c118d3176-kube-api-access-jnfx5\") on node \"crc\" DevicePath \"\"" Oct 13 06:42:18 crc kubenswrapper[4833]: I1013 06:42:18.045792 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs" event={"ID":"55762019-8ec1-492a-9691-d02c118d3176","Type":"ContainerDied","Data":"de19eef9770afd3e722d5985e364a1f7efb248b758ee4a6f107a80743bead199"} Oct 13 06:42:18 crc kubenswrapper[4833]: I1013 06:42:18.045861 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de19eef9770afd3e722d5985e364a1f7efb248b758ee4a6f107a80743bead199" Oct 13 06:42:18 crc kubenswrapper[4833]: I1013 06:42:18.045899 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs" Oct 13 06:42:18 crc kubenswrapper[4833]: I1013 06:42:18.633918 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" path="/var/lib/kubelet/pods/766ba0cc-c795-40d3-b06b-2aaee8e09747/volumes" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533089 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs"] Oct 13 06:42:22 crc kubenswrapper[4833]: E1013 06:42:22.533731 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerName="extract-utilities" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533743 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerName="extract-utilities" Oct 13 06:42:22 crc kubenswrapper[4833]: E1013 06:42:22.533755 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerName="registry-server" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533762 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerName="registry-server" Oct 13 06:42:22 crc kubenswrapper[4833]: E1013 06:42:22.533771 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerName="extract-content" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533778 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerName="extract-content" Oct 13 06:42:22 crc kubenswrapper[4833]: E1013 06:42:22.533785 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55762019-8ec1-492a-9691-d02c118d3176" containerName="util" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533791 
4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="55762019-8ec1-492a-9691-d02c118d3176" containerName="util" Oct 13 06:42:22 crc kubenswrapper[4833]: E1013 06:42:22.533799 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55762019-8ec1-492a-9691-d02c118d3176" containerName="pull" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533804 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="55762019-8ec1-492a-9691-d02c118d3176" containerName="pull" Oct 13 06:42:22 crc kubenswrapper[4833]: E1013 06:42:22.533812 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55762019-8ec1-492a-9691-d02c118d3176" containerName="extract" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533819 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="55762019-8ec1-492a-9691-d02c118d3176" containerName="extract" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533909 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="766ba0cc-c795-40d3-b06b-2aaee8e09747" containerName="registry-server" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.533920 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="55762019-8ec1-492a-9691-d02c118d3176" containerName="extract" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.534289 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.536232 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.536769 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.547637 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs"] Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.584519 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbf7\" (UniqueName: \"kubernetes.io/projected/07836b10-4d72-4bc6-ba2e-635a9a5e3d66-kube-api-access-xdbf7\") pod \"cert-manager-operator-controller-manager-57cd46d6d-qtkrs\" (UID: \"07836b10-4d72-4bc6-ba2e-635a9a5e3d66\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.589766 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-rjx5l" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.686109 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdbf7\" (UniqueName: \"kubernetes.io/projected/07836b10-4d72-4bc6-ba2e-635a9a5e3d66-kube-api-access-xdbf7\") pod \"cert-manager-operator-controller-manager-57cd46d6d-qtkrs\" (UID: \"07836b10-4d72-4bc6-ba2e-635a9a5e3d66\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.709860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdbf7\" (UniqueName: \"kubernetes.io/projected/07836b10-4d72-4bc6-ba2e-635a9a5e3d66-kube-api-access-xdbf7\") pod 
\"cert-manager-operator-controller-manager-57cd46d6d-qtkrs\" (UID: \"07836b10-4d72-4bc6-ba2e-635a9a5e3d66\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs" Oct 13 06:42:22 crc kubenswrapper[4833]: I1013 06:42:22.891903 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs" Oct 13 06:42:23 crc kubenswrapper[4833]: I1013 06:42:23.278187 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs"] Oct 13 06:42:24 crc kubenswrapper[4833]: I1013 06:42:24.087706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs" event={"ID":"07836b10-4d72-4bc6-ba2e-635a9a5e3d66","Type":"ContainerStarted","Data":"d371c42713a118ce7840113bc85365f069f4ed582c87b356ce55a4457756835b"} Oct 13 06:42:26 crc kubenswrapper[4833]: I1013 06:42:26.641668 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4nf9x" Oct 13 06:42:30 crc kubenswrapper[4833]: I1013 06:42:30.129587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs" event={"ID":"07836b10-4d72-4bc6-ba2e-635a9a5e3d66","Type":"ContainerStarted","Data":"8ec1db69123d65a1896bd758327c6a3f72cf5373b9e624e6148a3dee38ac64e7"} Oct 13 06:42:30 crc kubenswrapper[4833]: I1013 06:42:30.149270 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-qtkrs" podStartSLOduration=1.646059112 podStartE2EDuration="8.149252391s" podCreationTimestamp="2025-10-13 06:42:22 +0000 UTC" firstStartedPulling="2025-10-13 06:42:23.305516289 +0000 UTC m=+833.405939205" lastFinishedPulling="2025-10-13 06:42:29.808709568 +0000 UTC m=+839.909132484" observedRunningTime="2025-10-13 06:42:30.145917964 +0000 UTC m=+840.246340880" watchObservedRunningTime="2025-10-13 06:42:30.149252391 +0000 UTC m=+840.249675297" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.034627 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-wlkmt"] Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.035951 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.038056 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6jgf9" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.038355 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.039464 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.041386 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-wlkmt"] Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.144493 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45fe6616-16f5-4179-b297-6260a2573ae7-bound-sa-token\") pod \"cert-manager-webhook-d969966f-wlkmt\" (UID: \"45fe6616-16f5-4179-b297-6260a2573ae7\") " pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.144556 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8bqs\" (UniqueName: \"kubernetes.io/projected/45fe6616-16f5-4179-b297-6260a2573ae7-kube-api-access-d8bqs\") pod \"cert-manager-webhook-d969966f-wlkmt\" (UID: \"45fe6616-16f5-4179-b297-6260a2573ae7\") " pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.245623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bqs\" (UniqueName: \"kubernetes.io/projected/45fe6616-16f5-4179-b297-6260a2573ae7-kube-api-access-d8bqs\") pod \"cert-manager-webhook-d969966f-wlkmt\" (UID: \"45fe6616-16f5-4179-b297-6260a2573ae7\") " pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.245780 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45fe6616-16f5-4179-b297-6260a2573ae7-bound-sa-token\") pod \"cert-manager-webhook-d969966f-wlkmt\" (UID: \"45fe6616-16f5-4179-b297-6260a2573ae7\") " pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.264657 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45fe6616-16f5-4179-b297-6260a2573ae7-bound-sa-token\") pod \"cert-manager-webhook-d969966f-wlkmt\" (UID: \"45fe6616-16f5-4179-b297-6260a2573ae7\") " pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.266595 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8bqs\" (UniqueName: \"kubernetes.io/projected/45fe6616-16f5-4179-b297-6260a2573ae7-kube-api-access-d8bqs\") pod \"cert-manager-webhook-d969966f-wlkmt\" (UID: \"45fe6616-16f5-4179-b297-6260a2573ae7\") " pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.350444 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:34 crc kubenswrapper[4833]: I1013 06:42:34.815784 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-wlkmt"] Oct 13 06:42:34 crc kubenswrapper[4833]: W1013 06:42:34.823027 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45fe6616_16f5_4179_b297_6260a2573ae7.slice/crio-3e0374eeccf8c4aeeb7b7b3bc511c27188e330b335e0aaaf62d2978cb577bf5d WatchSource:0}: Error finding container 3e0374eeccf8c4aeeb7b7b3bc511c27188e330b335e0aaaf62d2978cb577bf5d: Status 404 returned error can't find the container with id 3e0374eeccf8c4aeeb7b7b3bc511c27188e330b335e0aaaf62d2978cb577bf5d Oct 13 06:42:35 crc kubenswrapper[4833]: I1013 06:42:35.155592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" event={"ID":"45fe6616-16f5-4179-b297-6260a2573ae7","Type":"ContainerStarted","Data":"3e0374eeccf8c4aeeb7b7b3bc511c27188e330b335e0aaaf62d2978cb577bf5d"} Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.540201 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97"] Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.541209 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.542932 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xr7f5" Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.547524 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97"] Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.714640 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2674d19-a371-463e-8baa-c7c278bea011-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-bvm97\" (UID: \"c2674d19-a371-463e-8baa-c7c278bea011\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.714684 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbcv\" (UniqueName: \"kubernetes.io/projected/c2674d19-a371-463e-8baa-c7c278bea011-kube-api-access-wjbcv\") pod \"cert-manager-cainjector-7d9f95dbf-bvm97\" (UID: \"c2674d19-a371-463e-8baa-c7c278bea011\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.816210 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2674d19-a371-463e-8baa-c7c278bea011-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-bvm97\" (UID: \"c2674d19-a371-463e-8baa-c7c278bea011\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.816265 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjbcv\" (UniqueName: \"kubernetes.io/projected/c2674d19-a371-463e-8baa-c7c278bea011-kube-api-access-wjbcv\") pod \"cert-manager-cainjector-7d9f95dbf-bvm97\" (UID: \"c2674d19-a371-463e-8baa-c7c278bea011\") " 
pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.834930 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjbcv\" (UniqueName: \"kubernetes.io/projected/c2674d19-a371-463e-8baa-c7c278bea011-kube-api-access-wjbcv\") pod \"cert-manager-cainjector-7d9f95dbf-bvm97\" (UID: \"c2674d19-a371-463e-8baa-c7c278bea011\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.838163 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2674d19-a371-463e-8baa-c7c278bea011-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-bvm97\" (UID: \"c2674d19-a371-463e-8baa-c7c278bea011\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" Oct 13 06:42:36 crc kubenswrapper[4833]: I1013 06:42:36.862173 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" Oct 13 06:42:37 crc kubenswrapper[4833]: I1013 06:42:37.265934 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97"] Oct 13 06:42:38 crc kubenswrapper[4833]: W1013 06:42:38.479940 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2674d19_a371_463e_8baa_c7c278bea011.slice/crio-6b833bd575b85479cc0c6fc17ffaee9608e30352b1fa57b72f63d37cc4baf621 WatchSource:0}: Error finding container 6b833bd575b85479cc0c6fc17ffaee9608e30352b1fa57b72f63d37cc4baf621: Status 404 returned error can't find the container with id 6b833bd575b85479cc0c6fc17ffaee9608e30352b1fa57b72f63d37cc4baf621 Oct 13 06:42:39 crc kubenswrapper[4833]: I1013 06:42:39.182280 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" event={"ID":"45fe6616-16f5-4179-b297-6260a2573ae7","Type":"ContainerStarted","Data":"cfaad68486f8c2c914f7fc7192ba11a50d6c67f704b5d22b5fba035d3399bc85"} Oct 13 06:42:39 crc kubenswrapper[4833]: I1013 06:42:39.182611 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:39 crc kubenswrapper[4833]: I1013 06:42:39.183681 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" event={"ID":"c2674d19-a371-463e-8baa-c7c278bea011","Type":"ContainerStarted","Data":"6b833bd575b85479cc0c6fc17ffaee9608e30352b1fa57b72f63d37cc4baf621"} Oct 13 06:42:39 crc kubenswrapper[4833]: I1013 06:42:39.201731 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" podStartSLOduration=1.48642091 podStartE2EDuration="5.201692197s" podCreationTimestamp="2025-10-13 06:42:34 +0000 UTC" firstStartedPulling="2025-10-13 06:42:34.826881968 +0000 UTC m=+844.927304904" lastFinishedPulling="2025-10-13 06:42:38.542153265 +0000 UTC m=+848.642576191" observedRunningTime="2025-10-13 06:42:39.197369331 +0000 UTC m=+849.297792287" watchObservedRunningTime="2025-10-13 06:42:39.201692197 +0000 UTC m=+849.302115113" Oct 13 06:42:40 crc kubenswrapper[4833]: I1013 06:42:40.190062 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" 
event={"ID":"c2674d19-a371-463e-8baa-c7c278bea011","Type":"ContainerStarted","Data":"20e32f7ea19f8648b13669e680692c463f9b89b7731d093e785dc912666da3ad"} Oct 13 06:42:40 crc kubenswrapper[4833]: I1013 06:42:40.221145 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-bvm97" podStartSLOduration=3.451666651 podStartE2EDuration="4.221129953s" podCreationTimestamp="2025-10-13 06:42:36 +0000 UTC" firstStartedPulling="2025-10-13 06:42:38.482281397 +0000 UTC m=+848.582704313" lastFinishedPulling="2025-10-13 06:42:39.251744699 +0000 UTC m=+849.352167615" observedRunningTime="2025-10-13 06:42:40.219261818 +0000 UTC m=+850.319684744" watchObservedRunningTime="2025-10-13 06:42:40.221129953 +0000 UTC m=+850.321552859" Oct 13 06:42:44 crc kubenswrapper[4833]: I1013 06:42:44.354225 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-d969966f-wlkmt" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.026742 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-69dqf"] Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.028873 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.033315 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-r2qpt" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.054994 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-69dqf"] Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.126518 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqgk\" (UniqueName: \"kubernetes.io/projected/31c1c666-1827-46ae-9e9f-5639a894a089-kube-api-access-bjqgk\") pod \"cert-manager-7d4cc89fcb-69dqf\" (UID: \"31c1c666-1827-46ae-9e9f-5639a894a089\") " pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.126646 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31c1c666-1827-46ae-9e9f-5639a894a089-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-69dqf\" (UID: \"31c1c666-1827-46ae-9e9f-5639a894a089\") " pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.227969 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjqgk\" (UniqueName: \"kubernetes.io/projected/31c1c666-1827-46ae-9e9f-5639a894a089-kube-api-access-bjqgk\") pod \"cert-manager-7d4cc89fcb-69dqf\" (UID: \"31c1c666-1827-46ae-9e9f-5639a894a089\") " pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.228034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31c1c666-1827-46ae-9e9f-5639a894a089-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-69dqf\" (UID: \"31c1c666-1827-46ae-9e9f-5639a894a089\") " pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.250958 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjqgk\" (UniqueName: 
\"kubernetes.io/projected/31c1c666-1827-46ae-9e9f-5639a894a089-kube-api-access-bjqgk\") pod \"cert-manager-7d4cc89fcb-69dqf\" (UID: \"31c1c666-1827-46ae-9e9f-5639a894a089\") " pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.262401 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31c1c666-1827-46ae-9e9f-5639a894a089-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-69dqf\" (UID: \"31c1c666-1827-46ae-9e9f-5639a894a089\") " pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.354012 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" Oct 13 06:42:52 crc kubenswrapper[4833]: I1013 06:42:52.768980 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-69dqf"] Oct 13 06:42:53 crc kubenswrapper[4833]: I1013 06:42:53.294938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" event={"ID":"31c1c666-1827-46ae-9e9f-5639a894a089","Type":"ContainerStarted","Data":"74ea19de2ef4b2ce770ffacacfbd496174d393afd1d97693befd94b67bb64403"} Oct 13 06:42:53 crc kubenswrapper[4833]: I1013 06:42:53.295008 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" event={"ID":"31c1c666-1827-46ae-9e9f-5639a894a089","Type":"ContainerStarted","Data":"83e6c7756b1489fe4e1913adce4b41f18c1f170af5fb4d62ff6d37a649f9f532"} Oct 13 06:42:53 crc kubenswrapper[4833]: I1013 06:42:53.315228 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-7d4cc89fcb-69dqf" podStartSLOduration=1.3152024469999999 podStartE2EDuration="1.315202447s" podCreationTimestamp="2025-10-13 06:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:42:53.310338396 +0000 UTC m=+863.410761332" watchObservedRunningTime="2025-10-13 06:42:53.315202447 +0000 UTC m=+863.415625403" Oct 13 06:42:57 crc kubenswrapper[4833]: I1013 06:42:57.978895 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xr5jq"] Oct 13 06:42:57 crc kubenswrapper[4833]: I1013 06:42:57.980358 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xr5jq" Oct 13 06:42:57 crc kubenswrapper[4833]: I1013 06:42:57.982174 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jq8q8" Oct 13 06:42:57 crc kubenswrapper[4833]: I1013 06:42:57.985137 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 13 06:42:57 crc kubenswrapper[4833]: I1013 06:42:57.985595 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 13 06:42:58 crc kubenswrapper[4833]: I1013 06:42:58.002216 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xr5jq"] Oct 13 06:42:58 crc kubenswrapper[4833]: I1013 06:42:58.019092 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xx6x\" (UniqueName: \"kubernetes.io/projected/1769cbf8-6c0f-4920-b831-62000d49c84d-kube-api-access-6xx6x\") pod \"openstack-operator-index-xr5jq\" (UID: \"1769cbf8-6c0f-4920-b831-62000d49c84d\") " pod="openstack-operators/openstack-operator-index-xr5jq" Oct 13 06:42:58 crc kubenswrapper[4833]: I1013 06:42:58.120287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xx6x\" (UniqueName: \"kubernetes.io/projected/1769cbf8-6c0f-4920-b831-62000d49c84d-kube-api-access-6xx6x\") pod \"openstack-operator-index-xr5jq\" (UID: \"1769cbf8-6c0f-4920-b831-62000d49c84d\") " pod="openstack-operators/openstack-operator-index-xr5jq" Oct 13 06:42:58 crc kubenswrapper[4833]: I1013 06:42:58.138745 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xx6x\" (UniqueName: \"kubernetes.io/projected/1769cbf8-6c0f-4920-b831-62000d49c84d-kube-api-access-6xx6x\") pod \"openstack-operator-index-xr5jq\" (UID: \"1769cbf8-6c0f-4920-b831-62000d49c84d\") " pod="openstack-operators/openstack-operator-index-xr5jq" Oct 13 06:42:58 crc kubenswrapper[4833]: I1013 06:42:58.298236 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xr5jq" Oct 13 06:42:58 crc kubenswrapper[4833]: I1013 06:42:58.706312 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xr5jq"] Oct 13 06:42:59 crc kubenswrapper[4833]: I1013 06:42:59.335872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xr5jq" event={"ID":"1769cbf8-6c0f-4920-b831-62000d49c84d","Type":"ContainerStarted","Data":"287b0e0c1c6b5ce86c4dd7c43b60a740318e59f291b979c510bc8a900993b1d3"} Oct 13 06:43:00 crc kubenswrapper[4833]: I1013 06:43:00.343045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xr5jq" event={"ID":"1769cbf8-6c0f-4920-b831-62000d49c84d","Type":"ContainerStarted","Data":"4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d"} Oct 13 06:43:00 crc kubenswrapper[4833]: I1013 06:43:00.361138 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xr5jq" podStartSLOduration=2.633356316 podStartE2EDuration="3.361114217s" podCreationTimestamp="2025-10-13 06:42:57 +0000 UTC" firstStartedPulling="2025-10-13 06:42:58.712006136 +0000 UTC m=+868.812429052" lastFinishedPulling="2025-10-13 06:42:59.439764037 +0000 UTC m=+869.540186953" observedRunningTime="2025-10-13 06:43:00.358736358 +0000 UTC m=+870.459159314" watchObservedRunningTime="2025-10-13 06:43:00.361114217 +0000 UTC m=+870.461537133" Oct 13 06:43:01 crc kubenswrapper[4833]: I1013 06:43:01.551602 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xr5jq"] Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.348842 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7gvlk"] Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.349756 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.359397 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xr5jq" podUID="1769cbf8-6c0f-4920-b831-62000d49c84d" containerName="registry-server" containerID="cri-o://4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d" gracePeriod=2 Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.359658 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7gvlk"] Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.388062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdhk\" (UniqueName: \"kubernetes.io/projected/a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298-kube-api-access-hwdhk\") pod \"openstack-operator-index-7gvlk\" (UID: \"a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298\") " pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.489521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdhk\" (UniqueName: \"kubernetes.io/projected/a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298-kube-api-access-hwdhk\") pod \"openstack-operator-index-7gvlk\" (UID: \"a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298\") " pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.512204 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdhk\" (UniqueName: \"kubernetes.io/projected/a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298-kube-api-access-hwdhk\") pod \"openstack-operator-index-7gvlk\" (UID: \"a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298\") " pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.714136 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.760808 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xr5jq" Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.792922 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xx6x\" (UniqueName: \"kubernetes.io/projected/1769cbf8-6c0f-4920-b831-62000d49c84d-kube-api-access-6xx6x\") pod \"1769cbf8-6c0f-4920-b831-62000d49c84d\" (UID: \"1769cbf8-6c0f-4920-b831-62000d49c84d\") " Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.798055 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1769cbf8-6c0f-4920-b831-62000d49c84d-kube-api-access-6xx6x" (OuterVolumeSpecName: "kube-api-access-6xx6x") pod "1769cbf8-6c0f-4920-b831-62000d49c84d" (UID: "1769cbf8-6c0f-4920-b831-62000d49c84d"). InnerVolumeSpecName "kube-api-access-6xx6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:43:02 crc kubenswrapper[4833]: I1013 06:43:02.894987 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xx6x\" (UniqueName: \"kubernetes.io/projected/1769cbf8-6c0f-4920-b831-62000d49c84d-kube-api-access-6xx6x\") on node \"crc\" DevicePath \"\"" Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.125361 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7gvlk"] Oct 13 06:43:03 crc kubenswrapper[4833]: W1013 06:43:03.128499 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9ca3f95_4e28_4e9d_ba53_2c6aa8dd7298.slice/crio-748cc5eb80965897cb0b26bf5eccf501b5f5684b813fccbe89cab6c03cb85d45 WatchSource:0}: Error finding container 748cc5eb80965897cb0b26bf5eccf501b5f5684b813fccbe89cab6c03cb85d45: Status 404 returned error can't find the container with id 748cc5eb80965897cb0b26bf5eccf501b5f5684b813fccbe89cab6c03cb85d45 Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.367788 4833 generic.go:334] "Generic (PLEG): container finished" podID="1769cbf8-6c0f-4920-b831-62000d49c84d" containerID="4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d" exitCode=0 Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.367858 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xr5jq" event={"ID":"1769cbf8-6c0f-4920-b831-62000d49c84d","Type":"ContainerDied","Data":"4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d"} Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.367879 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xr5jq" Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.367938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xr5jq" event={"ID":"1769cbf8-6c0f-4920-b831-62000d49c84d","Type":"ContainerDied","Data":"287b0e0c1c6b5ce86c4dd7c43b60a740318e59f291b979c510bc8a900993b1d3"} Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.367972 4833 scope.go:117] "RemoveContainer" containerID="4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d" Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.369095 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7gvlk" event={"ID":"a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298","Type":"ContainerStarted","Data":"748cc5eb80965897cb0b26bf5eccf501b5f5684b813fccbe89cab6c03cb85d45"} Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.381111 4833 scope.go:117] "RemoveContainer" containerID="4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d" Oct 13 06:43:03 crc kubenswrapper[4833]: E1013 06:43:03.381599 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d\": container with ID starting with 4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d not found: ID does not exist" containerID="4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d" Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.381666 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d"} err="failed to 
get container status \"4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d\": rpc error: code = NotFound desc = could not find container \"4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d\": container with ID starting with 4837dc574abd4822f068d18ef5fa4d334a18d7e817556c5dfec4bbc05fb0c62d not found: ID does not exist" Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.406776 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xr5jq"] Oct 13 06:43:03 crc kubenswrapper[4833]: I1013 06:43:03.409714 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xr5jq"] Oct 13 06:43:04 crc kubenswrapper[4833]: I1013 06:43:04.375658 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7gvlk" event={"ID":"a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298","Type":"ContainerStarted","Data":"f28cb9aa0eb0d230a4c42456e92347d4493a4714ae113c7d144ed88768f65735"} Oct 13 06:43:04 crc kubenswrapper[4833]: I1013 06:43:04.393489 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7gvlk" podStartSLOduration=1.891715035 podStartE2EDuration="2.393471917s" podCreationTimestamp="2025-10-13 06:43:02 +0000 UTC" firstStartedPulling="2025-10-13 06:43:03.136323361 +0000 UTC m=+873.236746277" lastFinishedPulling="2025-10-13 06:43:03.638080243 +0000 UTC m=+873.738503159" observedRunningTime="2025-10-13 06:43:04.390075548 +0000 UTC m=+874.490498484" watchObservedRunningTime="2025-10-13 06:43:04.393471917 +0000 UTC m=+874.493894833" Oct 13 06:43:04 crc kubenswrapper[4833]: I1013 06:43:04.636601 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1769cbf8-6c0f-4920-b831-62000d49c84d" path="/var/lib/kubelet/pods/1769cbf8-6c0f-4920-b831-62000d49c84d/volumes" Oct 13 06:43:04 crc kubenswrapper[4833]: I1013 06:43:04.956146 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7rnc"] Oct 13 06:43:04 crc kubenswrapper[4833]: E1013 06:43:04.956405 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1769cbf8-6c0f-4920-b831-62000d49c84d" containerName="registry-server" Oct 13 06:43:04 crc kubenswrapper[4833]: I1013 06:43:04.956419 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1769cbf8-6c0f-4920-b831-62000d49c84d" containerName="registry-server" Oct 13 06:43:04 crc kubenswrapper[4833]: I1013 06:43:04.956591 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1769cbf8-6c0f-4920-b831-62000d49c84d" containerName="registry-server" Oct 13 06:43:04 crc kubenswrapper[4833]: I1013 06:43:04.958831 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:04 crc kubenswrapper[4833]: I1013 06:43:04.969452 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7rnc"] Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.030626 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-utilities\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.030721 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlfss\" (UniqueName: \"kubernetes.io/projected/052a40ff-c27d-4910-888b-6b525980ac75-kube-api-access-vlfss\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.030765 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-catalog-content\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.132712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-utilities\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.132773 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlfss\" (UniqueName: \"kubernetes.io/projected/052a40ff-c27d-4910-888b-6b525980ac75-kube-api-access-vlfss\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.132816 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-catalog-content\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.133185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-utilities\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.133243 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-catalog-content\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.164751 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vlfss\" (UniqueName: \"kubernetes.io/projected/052a40ff-c27d-4910-888b-6b525980ac75-kube-api-access-vlfss\") pod \"certified-operators-r7rnc\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.286687 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:05 crc kubenswrapper[4833]: I1013 06:43:05.781660 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7rnc"] Oct 13 06:43:05 crc kubenswrapper[4833]: W1013 06:43:05.792880 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod052a40ff_c27d_4910_888b_6b525980ac75.slice/crio-b4c125cfc5fdcf0c2d609a4bd2678ef34884a95816358eb161fc4091cd8754d9 WatchSource:0}: Error finding container b4c125cfc5fdcf0c2d609a4bd2678ef34884a95816358eb161fc4091cd8754d9: Status 404 returned error can't find the container with id b4c125cfc5fdcf0c2d609a4bd2678ef34884a95816358eb161fc4091cd8754d9 Oct 13 06:43:06 crc kubenswrapper[4833]: I1013 06:43:06.391618 4833 generic.go:334] "Generic (PLEG): container finished" podID="052a40ff-c27d-4910-888b-6b525980ac75" containerID="cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c" exitCode=0 Oct 13 06:43:06 crc kubenswrapper[4833]: I1013 06:43:06.391682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7rnc" event={"ID":"052a40ff-c27d-4910-888b-6b525980ac75","Type":"ContainerDied","Data":"cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c"} Oct 13 06:43:06 crc kubenswrapper[4833]: I1013 06:43:06.391720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7rnc" event={"ID":"052a40ff-c27d-4910-888b-6b525980ac75","Type":"ContainerStarted","Data":"b4c125cfc5fdcf0c2d609a4bd2678ef34884a95816358eb161fc4091cd8754d9"} Oct 13 06:43:07 crc kubenswrapper[4833]: I1013 06:43:07.407783 4833 generic.go:334] "Generic (PLEG): container finished" podID="052a40ff-c27d-4910-888b-6b525980ac75" containerID="eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1" exitCode=0 Oct 13 06:43:07 crc kubenswrapper[4833]: I1013 06:43:07.408332 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7rnc" event={"ID":"052a40ff-c27d-4910-888b-6b525980ac75","Type":"ContainerDied","Data":"eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1"} Oct 13 06:43:08 crc kubenswrapper[4833]: I1013 06:43:08.418500 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7rnc" event={"ID":"052a40ff-c27d-4910-888b-6b525980ac75","Type":"ContainerStarted","Data":"810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621"} Oct 13 06:43:08 crc kubenswrapper[4833]: I1013 06:43:08.438329 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7rnc" podStartSLOduration=2.998613734 podStartE2EDuration="4.438315658s" podCreationTimestamp="2025-10-13 06:43:04 +0000 UTC" firstStartedPulling="2025-10-13 06:43:06.392982437 +0000 UTC m=+876.493405363" lastFinishedPulling="2025-10-13 06:43:07.832684321 +0000 UTC m=+877.933107287" observedRunningTime="2025-10-13 06:43:08.437521714 +0000 UTC 
m=+878.537944630" watchObservedRunningTime="2025-10-13 06:43:08.438315658 +0000 UTC m=+878.538738574" Oct 13 06:43:12 crc kubenswrapper[4833]: I1013 06:43:12.715688 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:12 crc kubenswrapper[4833]: I1013 06:43:12.716715 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:12 crc kubenswrapper[4833]: I1013 06:43:12.759414 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:13 crc kubenswrapper[4833]: I1013 06:43:13.480353 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7gvlk" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.208235 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g"] Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.210071 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.212441 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-b5mh6" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.223325 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g"] Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.278352 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69cpp\" (UniqueName: \"kubernetes.io/projected/1c6b623c-4ccf-4b07-8448-c9b8b403615c-kube-api-access-69cpp\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.278512 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.278615 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.287155 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.287636 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:15 crc 
kubenswrapper[4833]: I1013 06:43:15.333256 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.379484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.379696 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.379790 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69cpp\" (UniqueName: \"kubernetes.io/projected/1c6b623c-4ccf-4b07-8448-c9b8b403615c-kube-api-access-69cpp\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.379981 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.380216 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.398670 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69cpp\" (UniqueName: \"kubernetes.io/projected/1c6b623c-4ccf-4b07-8448-c9b8b403615c-kube-api-access-69cpp\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.543696 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.570103 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:15 crc kubenswrapper[4833]: I1013 06:43:15.987241 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g"] Oct 13 06:43:16 crc kubenswrapper[4833]: W1013 06:43:16.002924 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c6b623c_4ccf_4b07_8448_c9b8b403615c.slice/crio-6f93e02e535019fb1df6f3eaf6195846dc8bdef44c54661fade6962fa69ab5cd WatchSource:0}: Error finding container 6f93e02e535019fb1df6f3eaf6195846dc8bdef44c54661fade6962fa69ab5cd: Status 404 returned error can't find the container with id 6f93e02e535019fb1df6f3eaf6195846dc8bdef44c54661fade6962fa69ab5cd Oct 13 06:43:16 crc kubenswrapper[4833]: I1013 06:43:16.480962 4833 generic.go:334] "Generic (PLEG): container finished" podID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerID="29dd77ddf07f9ecef2bfbe71c269ae2be37b6ba0082244f4710ad51ed5b118f6" exitCode=0 Oct 13 06:43:16 crc kubenswrapper[4833]: I1013 06:43:16.481036 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" event={"ID":"1c6b623c-4ccf-4b07-8448-c9b8b403615c","Type":"ContainerDied","Data":"29dd77ddf07f9ecef2bfbe71c269ae2be37b6ba0082244f4710ad51ed5b118f6"} Oct 13 06:43:16 crc kubenswrapper[4833]: I1013 06:43:16.481093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" event={"ID":"1c6b623c-4ccf-4b07-8448-c9b8b403615c","Type":"ContainerStarted","Data":"6f93e02e535019fb1df6f3eaf6195846dc8bdef44c54661fade6962fa69ab5cd"} Oct 13 06:43:17 crc kubenswrapper[4833]: I1013 06:43:17.491686 4833 generic.go:334] "Generic (PLEG): container finished" podID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerID="3ee6bb86c38479eb909a2b2f4826942a09657d2555312b6d6d80f1b6bf8c689a" exitCode=0 Oct 13 06:43:17 crc kubenswrapper[4833]: I1013 06:43:17.491825 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" event={"ID":"1c6b623c-4ccf-4b07-8448-c9b8b403615c","Type":"ContainerDied","Data":"3ee6bb86c38479eb909a2b2f4826942a09657d2555312b6d6d80f1b6bf8c689a"} Oct 13 06:43:18 crc kubenswrapper[4833]: I1013 06:43:18.503786 4833 generic.go:334] "Generic (PLEG): container finished" podID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerID="d22ef6b69d6ecb66f9cfc45de3530b0fd816eecf59e0813da4d5a373ed68e0de" exitCode=0 Oct 13 06:43:18 crc kubenswrapper[4833]: I1013 06:43:18.503860 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" event={"ID":"1c6b623c-4ccf-4b07-8448-c9b8b403615c","Type":"ContainerDied","Data":"d22ef6b69d6ecb66f9cfc45de3530b0fd816eecf59e0813da4d5a373ed68e0de"} Oct 13 06:43:18 crc kubenswrapper[4833]: I1013 06:43:18.549089 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7rnc"] Oct 13 06:43:18 crc kubenswrapper[4833]: I1013 06:43:18.549493 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7rnc" podUID="052a40ff-c27d-4910-888b-6b525980ac75" containerName="registry-server" 
containerID="cri-o://810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621" gracePeriod=2 Oct 13 06:43:18 crc kubenswrapper[4833]: I1013 06:43:18.938291 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.039463 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-utilities\") pod \"052a40ff-c27d-4910-888b-6b525980ac75\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.039599 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlfss\" (UniqueName: \"kubernetes.io/projected/052a40ff-c27d-4910-888b-6b525980ac75-kube-api-access-vlfss\") pod \"052a40ff-c27d-4910-888b-6b525980ac75\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.040456 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-utilities" (OuterVolumeSpecName: "utilities") pod "052a40ff-c27d-4910-888b-6b525980ac75" (UID: "052a40ff-c27d-4910-888b-6b525980ac75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.040579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-catalog-content\") pod \"052a40ff-c27d-4910-888b-6b525980ac75\" (UID: \"052a40ff-c27d-4910-888b-6b525980ac75\") " Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.040875 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.044530 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052a40ff-c27d-4910-888b-6b525980ac75-kube-api-access-vlfss" (OuterVolumeSpecName: "kube-api-access-vlfss") pod "052a40ff-c27d-4910-888b-6b525980ac75" (UID: "052a40ff-c27d-4910-888b-6b525980ac75"). InnerVolumeSpecName "kube-api-access-vlfss". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.082861 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "052a40ff-c27d-4910-888b-6b525980ac75" (UID: "052a40ff-c27d-4910-888b-6b525980ac75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.142144 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlfss\" (UniqueName: \"kubernetes.io/projected/052a40ff-c27d-4910-888b-6b525980ac75-kube-api-access-vlfss\") on node \"crc\" DevicePath \"\"" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.142180 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052a40ff-c27d-4910-888b-6b525980ac75-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.525051 4833 generic.go:334] "Generic (PLEG): container finished" podID="052a40ff-c27d-4910-888b-6b525980ac75" containerID="810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621" exitCode=0 Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.525381 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7rnc" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.529764 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7rnc" event={"ID":"052a40ff-c27d-4910-888b-6b525980ac75","Type":"ContainerDied","Data":"810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621"} Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.529854 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7rnc" event={"ID":"052a40ff-c27d-4910-888b-6b525980ac75","Type":"ContainerDied","Data":"b4c125cfc5fdcf0c2d609a4bd2678ef34884a95816358eb161fc4091cd8754d9"} Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.529890 4833 scope.go:117] "RemoveContainer" containerID="810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.556363 4833 scope.go:117] "RemoveContainer" containerID="eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.567597 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7rnc"] Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.591908 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7rnc"] Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.596485 4833 scope.go:117] "RemoveContainer" containerID="cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.612790 4833 scope.go:117] "RemoveContainer" containerID="810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621" Oct 13 06:43:19 crc kubenswrapper[4833]: E1013 06:43:19.613179 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621\": container with ID starting with 810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621 not found: ID does not exist" containerID="810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.613207 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621"} err="failed to get container status 
\"810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621\": rpc error: code = NotFound desc = could not find container \"810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621\": container with ID starting with 810eb3ec37013aedc949e21cfc99f5066d88851f5be8be2c1e3bfe1089a7f621 not found: ID does not exist" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.613228 4833 scope.go:117] "RemoveContainer" containerID="eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1" Oct 13 06:43:19 crc kubenswrapper[4833]: E1013 06:43:19.613420 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1\": container with ID starting with eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1 not found: ID does not exist" containerID="eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.613442 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1"} err="failed to get container status \"eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1\": rpc error: code = NotFound desc = could not find container \"eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1\": container with ID starting with eb513754fffec50a2a52ed11b97eaee25ac4ca85539282094ca98bd826850ef1 not found: ID does not exist" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.613456 4833 scope.go:117] "RemoveContainer" containerID="cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c" Oct 13 06:43:19 crc kubenswrapper[4833]: E1013 06:43:19.613649 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c\": container with ID starting with cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c not found: ID does not exist" containerID="cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.613670 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c"} err="failed to get container status \"cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c\": rpc error: code = NotFound desc = could not find container \"cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c\": container with ID starting with cb7a94d9311404c9cdbd9d852eaa0552508f9e7af77472472608ca60297d2c5c not found: ID does not exist" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.823363 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.877713 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69cpp\" (UniqueName: \"kubernetes.io/projected/1c6b623c-4ccf-4b07-8448-c9b8b403615c-kube-api-access-69cpp\") pod \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.877816 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-util\") pod \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.877899 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-bundle\") pod \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\" (UID: \"1c6b623c-4ccf-4b07-8448-c9b8b403615c\") " Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.878825 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-bundle" (OuterVolumeSpecName: "bundle") pod "1c6b623c-4ccf-4b07-8448-c9b8b403615c" (UID: "1c6b623c-4ccf-4b07-8448-c9b8b403615c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.881649 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6b623c-4ccf-4b07-8448-c9b8b403615c-kube-api-access-69cpp" (OuterVolumeSpecName: "kube-api-access-69cpp") pod "1c6b623c-4ccf-4b07-8448-c9b8b403615c" (UID: "1c6b623c-4ccf-4b07-8448-c9b8b403615c"). InnerVolumeSpecName "kube-api-access-69cpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.891731 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-util" (OuterVolumeSpecName: "util") pod "1c6b623c-4ccf-4b07-8448-c9b8b403615c" (UID: "1c6b623c-4ccf-4b07-8448-c9b8b403615c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.979208 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.979245 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69cpp\" (UniqueName: \"kubernetes.io/projected/1c6b623c-4ccf-4b07-8448-c9b8b403615c-kube-api-access-69cpp\") on node \"crc\" DevicePath \"\"" Oct 13 06:43:19 crc kubenswrapper[4833]: I1013 06:43:19.979256 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c6b623c-4ccf-4b07-8448-c9b8b403615c-util\") on node \"crc\" DevicePath \"\"" Oct 13 06:43:20 crc kubenswrapper[4833]: I1013 06:43:20.538924 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" event={"ID":"1c6b623c-4ccf-4b07-8448-c9b8b403615c","Type":"ContainerDied","Data":"6f93e02e535019fb1df6f3eaf6195846dc8bdef44c54661fade6962fa69ab5cd"} Oct 13 06:43:20 crc kubenswrapper[4833]: I1013 06:43:20.538980 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f93e02e535019fb1df6f3eaf6195846dc8bdef44c54661fade6962fa69ab5cd" Oct 13 06:43:20 crc kubenswrapper[4833]: I1013 06:43:20.538996 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g" Oct 13 06:43:20 crc kubenswrapper[4833]: I1013 06:43:20.641187 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052a40ff-c27d-4910-888b-6b525980ac75" path="/var/lib/kubelet/pods/052a40ff-c27d-4910-888b-6b525980ac75/volumes" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.084427 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-56fw9"] Oct 13 06:43:25 crc kubenswrapper[4833]: E1013 06:43:25.085158 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052a40ff-c27d-4910-888b-6b525980ac75" containerName="extract-utilities" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085171 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="052a40ff-c27d-4910-888b-6b525980ac75" containerName="extract-utilities" Oct 13 06:43:25 crc kubenswrapper[4833]: E1013 06:43:25.085180 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052a40ff-c27d-4910-888b-6b525980ac75" containerName="extract-content" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085186 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="052a40ff-c27d-4910-888b-6b525980ac75" containerName="extract-content" Oct 13 06:43:25 crc kubenswrapper[4833]: E1013 06:43:25.085201 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052a40ff-c27d-4910-888b-6b525980ac75" containerName="registry-server" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085207 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="052a40ff-c27d-4910-888b-6b525980ac75" containerName="registry-server" Oct 13 06:43:25 crc kubenswrapper[4833]: E1013 06:43:25.085215 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerName="extract" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085220 
4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerName="extract" Oct 13 06:43:25 crc kubenswrapper[4833]: E1013 06:43:25.085231 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerName="pull" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085237 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerName="pull" Oct 13 06:43:25 crc kubenswrapper[4833]: E1013 06:43:25.085249 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerName="util" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085254 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerName="util" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085355 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6b623c-4ccf-4b07-8448-c9b8b403615c" containerName="extract" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085365 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="052a40ff-c27d-4910-888b-6b525980ac75" containerName="registry-server" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.085957 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.089422 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-pjjfj" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.115045 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-56fw9"] Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.147520 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nng5z\" (UniqueName: \"kubernetes.io/projected/3e030361-241a-4361-b8aa-13454891c551-kube-api-access-nng5z\") pod \"openstack-operator-controller-operator-688d597459-56fw9\" (UID: \"3e030361-241a-4361-b8aa-13454891c551\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.249162 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nng5z\" (UniqueName: \"kubernetes.io/projected/3e030361-241a-4361-b8aa-13454891c551-kube-api-access-nng5z\") pod \"openstack-operator-controller-operator-688d597459-56fw9\" (UID: \"3e030361-241a-4361-b8aa-13454891c551\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.281644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nng5z\" (UniqueName: \"kubernetes.io/projected/3e030361-241a-4361-b8aa-13454891c551-kube-api-access-nng5z\") pod \"openstack-operator-controller-operator-688d597459-56fw9\" (UID: \"3e030361-241a-4361-b8aa-13454891c551\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.402405 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" Oct 13 06:43:25 crc kubenswrapper[4833]: I1013 06:43:25.615192 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-56fw9"] Oct 13 06:43:26 crc kubenswrapper[4833]: I1013 06:43:26.587023 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" event={"ID":"3e030361-241a-4361-b8aa-13454891c551","Type":"ContainerStarted","Data":"fe2e4ddc2b59a9860d11cb48cba7bd93fb06bbbd6eb8751f16aa4b0cb07e3c31"} Oct 13 06:43:30 crc kubenswrapper[4833]: I1013 06:43:30.543073 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:43:30 crc kubenswrapper[4833]: I1013 06:43:30.543646 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:43:30 crc kubenswrapper[4833]: I1013 06:43:30.623050 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" event={"ID":"3e030361-241a-4361-b8aa-13454891c551","Type":"ContainerStarted","Data":"8faccbbf44b3369dd48e8c5bc9c0ff0b4d51fc32797513d1224fb2ba8b447801"} Oct 13 06:43:32 crc kubenswrapper[4833]: I1013 06:43:32.637191 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" event={"ID":"3e030361-241a-4361-b8aa-13454891c551","Type":"ContainerStarted","Data":"430c6ed68f6b2222c12205ef31a4e23c2c054c341c2a3538c59058a54964aa6f"} Oct 13 06:43:32 crc kubenswrapper[4833]: I1013 06:43:32.638341 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" Oct 13 06:43:32 crc kubenswrapper[4833]: I1013 06:43:32.668724 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" podStartSLOduration=1.397682976 podStartE2EDuration="7.668704827s" podCreationTimestamp="2025-10-13 06:43:25 +0000 UTC" firstStartedPulling="2025-10-13 06:43:25.623425634 +0000 UTC m=+895.723848550" lastFinishedPulling="2025-10-13 06:43:31.894447465 +0000 UTC m=+901.994870401" observedRunningTime="2025-10-13 06:43:32.664970128 +0000 UTC m=+902.765393054" watchObservedRunningTime="2025-10-13 06:43:32.668704827 +0000 UTC m=+902.769127743" Oct 13 06:43:34 crc kubenswrapper[4833]: I1013 06:43:34.660085 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-688d597459-56fw9" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.587853 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.589645 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.597896 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4sk6p" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.601080 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.602400 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.607658 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dz566" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.609135 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.690479 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.690530 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.691708 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.695615 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.696834 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-7rgd8" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.696884 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.701033 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.702294 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.703032 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-bvdb5" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.704840 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-v6p8k" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.705085 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.710317 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.730889 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.744586 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.745760 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.748013 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5gsht" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.749486 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpbrl\" (UniqueName: \"kubernetes.io/projected/bbfa1bde-53ad-46fb-9217-cfd3bcbd9355-kube-api-access-xpbrl\") pod \"barbican-operator-controller-manager-658bdf4b74-w9zrf\" (UID: \"bbfa1bde-53ad-46fb-9217-cfd3bcbd9355\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.749737 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrgj\" (UniqueName: \"kubernetes.io/projected/9608166e-0d48-4e57-99d7-6fa85036e7bf-kube-api-access-6xrgj\") pod \"cinder-operator-controller-manager-7b7fb68549-mhd5p\" (UID: \"9608166e-0d48-4e57-99d7-6fa85036e7bf\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.770828 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.770890 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.771858 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.774189 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.775241 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.781990 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.790046 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jtmc8" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.793887 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.797582 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.802721 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-c9lv5" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.826380 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.843856 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.844009 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.844980 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.845152 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.845216 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.847985 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mnjqn" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.848432 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-p4cc2" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.851828 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrgj\" (UniqueName: \"kubernetes.io/projected/9608166e-0d48-4e57-99d7-6fa85036e7bf-kube-api-access-6xrgj\") pod \"cinder-operator-controller-manager-7b7fb68549-mhd5p\" (UID: \"9608166e-0d48-4e57-99d7-6fa85036e7bf\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.851884 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpbrl\" (UniqueName: \"kubernetes.io/projected/bbfa1bde-53ad-46fb-9217-cfd3bcbd9355-kube-api-access-xpbrl\") pod \"barbican-operator-controller-manager-658bdf4b74-w9zrf\" (UID: \"bbfa1bde-53ad-46fb-9217-cfd3bcbd9355\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.851950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q625\" (UniqueName: \"kubernetes.io/projected/a24ab3f0-f53c-4f16-9fd8-e0e69149776d-kube-api-access-4q625\") pod \"designate-operator-controller-manager-85d5d9dd78-hmcxj\" (UID: \"a24ab3f0-f53c-4f16-9fd8-e0e69149776d\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.851970 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78f6\" (UniqueName: \"kubernetes.io/projected/3da4ee47-a023-411b-b367-c7eae5c8bd9b-kube-api-access-h78f6\") pod \"glance-operator-controller-manager-84b9b84486-5pqx6\" (UID: \"3da4ee47-a023-411b-b367-c7eae5c8bd9b\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.851995 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpx9t\" (UniqueName: \"kubernetes.io/projected/63700b89-455e-4df1-baec-273d82261c60-kube-api-access-tpx9t\") pod \"horizon-operator-controller-manager-7ffbcb7588-4nz57\" (UID: \"63700b89-455e-4df1-baec-273d82261c60\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.852014 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bsgx\" (UniqueName: \"kubernetes.io/projected/677b9ffe-683c-4a84-9b7c-a625280c79f8-kube-api-access-8bsgx\") pod \"heat-operator-controller-manager-858f76bbdd-wl4kq\" (UID: \"677b9ffe-683c-4a84-9b7c-a625280c79f8\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.878632 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw"] Oct 13 
06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.880501 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.883293 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rhnzf" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.893390 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.898760 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.909856 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5stqj" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.910505 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpbrl\" (UniqueName: \"kubernetes.io/projected/bbfa1bde-53ad-46fb-9217-cfd3bcbd9355-kube-api-access-xpbrl\") pod \"barbican-operator-controller-manager-658bdf4b74-w9zrf\" (UID: \"bbfa1bde-53ad-46fb-9217-cfd3bcbd9355\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.910556 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.915067 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.919596 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.919908 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.925028 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-lcdjx" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.950981 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrgj\" (UniqueName: \"kubernetes.io/projected/9608166e-0d48-4e57-99d7-6fa85036e7bf-kube-api-access-6xrgj\") pod \"cinder-operator-controller-manager-7b7fb68549-mhd5p\" (UID: \"9608166e-0d48-4e57-99d7-6fa85036e7bf\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.952913 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bsgx\" (UniqueName: \"kubernetes.io/projected/677b9ffe-683c-4a84-9b7c-a625280c79f8-kube-api-access-8bsgx\") pod \"heat-operator-controller-manager-858f76bbdd-wl4kq\" (UID: \"677b9ffe-683c-4a84-9b7c-a625280c79f8\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.952952 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert\") pod \"infra-operator-controller-manager-656bcbd775-6wmgz\" (UID: \"47ff6370-fc12-4d28-a59a-1ae1614191a9\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.952975 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvdw\" (UniqueName: \"kubernetes.io/projected/47ff6370-fc12-4d28-a59a-1ae1614191a9-kube-api-access-zdvdw\") pod \"infra-operator-controller-manager-656bcbd775-6wmgz\" (UID: \"47ff6370-fc12-4d28-a59a-1ae1614191a9\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.953008 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmdfh\" (UniqueName: \"kubernetes.io/projected/8ea24fc5-58ac-426e-8943-eccec9261185-kube-api-access-kmdfh\") pod \"ironic-operator-controller-manager-9c5c78d49-9rvcx\" (UID: \"8ea24fc5-58ac-426e-8943-eccec9261185\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.953030 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2b8\" (UniqueName: \"kubernetes.io/projected/1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb-kube-api-access-gn2b8\") pod \"manila-operator-controller-manager-5f67fbc655-dzsh5\" (UID: \"1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.953079 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4db25\" (UniqueName: \"kubernetes.io/projected/bc224de1-26e7-447e-a2cd-9290d6a756c8-kube-api-access-4db25\") pod \"keystone-operator-controller-manager-55b6b7c7b8-sqftk\" (UID: \"bc224de1-26e7-447e-a2cd-9290d6a756c8\") " 
pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.953108 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q625\" (UniqueName: \"kubernetes.io/projected/a24ab3f0-f53c-4f16-9fd8-e0e69149776d-kube-api-access-4q625\") pod \"designate-operator-controller-manager-85d5d9dd78-hmcxj\" (UID: \"a24ab3f0-f53c-4f16-9fd8-e0e69149776d\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.953125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78f6\" (UniqueName: \"kubernetes.io/projected/3da4ee47-a023-411b-b367-c7eae5c8bd9b-kube-api-access-h78f6\") pod \"glance-operator-controller-manager-84b9b84486-5pqx6\" (UID: \"3da4ee47-a023-411b-b367-c7eae5c8bd9b\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.953157 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpx9t\" (UniqueName: \"kubernetes.io/projected/63700b89-455e-4df1-baec-273d82261c60-kube-api-access-tpx9t\") pod \"horizon-operator-controller-manager-7ffbcb7588-4nz57\" (UID: \"63700b89-455e-4df1-baec-273d82261c60\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.973679 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.983155 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.984132 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh"] Oct 13 06:43:52 crc kubenswrapper[4833]: I1013 06:43:52.984369 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.007992 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rwt2l" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.013174 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q625\" (UniqueName: \"kubernetes.io/projected/a24ab3f0-f53c-4f16-9fd8-e0e69149776d-kube-api-access-4q625\") pod \"designate-operator-controller-manager-85d5d9dd78-hmcxj\" (UID: \"a24ab3f0-f53c-4f16-9fd8-e0e69149776d\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.022673 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78f6\" (UniqueName: \"kubernetes.io/projected/3da4ee47-a023-411b-b367-c7eae5c8bd9b-kube-api-access-h78f6\") pod \"glance-operator-controller-manager-84b9b84486-5pqx6\" (UID: \"3da4ee47-a023-411b-b367-c7eae5c8bd9b\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.023766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bsgx\" (UniqueName: \"kubernetes.io/projected/677b9ffe-683c-4a84-9b7c-a625280c79f8-kube-api-access-8bsgx\") pod \"heat-operator-controller-manager-858f76bbdd-wl4kq\" (UID: \"677b9ffe-683c-4a84-9b7c-a625280c79f8\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.028864 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.071292 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.073057 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q57ph\" (UniqueName: \"kubernetes.io/projected/390d3ba7-7d67-4b01-9729-22040d2c8ecd-kube-api-access-q57ph\") pod \"neutron-operator-controller-manager-79d585cb66-542w7\" (UID: \"390d3ba7-7d67-4b01-9729-22040d2c8ecd\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.073306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert\") pod \"infra-operator-controller-manager-656bcbd775-6wmgz\" (UID: \"47ff6370-fc12-4d28-a59a-1ae1614191a9\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.073337 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvdw\" (UniqueName: \"kubernetes.io/projected/47ff6370-fc12-4d28-a59a-1ae1614191a9-kube-api-access-zdvdw\") pod \"infra-operator-controller-manager-656bcbd775-6wmgz\" (UID: \"47ff6370-fc12-4d28-a59a-1ae1614191a9\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.073515 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmdfh\" (UniqueName: \"kubernetes.io/projected/8ea24fc5-58ac-426e-8943-eccec9261185-kube-api-access-kmdfh\") pod \"ironic-operator-controller-manager-9c5c78d49-9rvcx\" (UID: \"8ea24fc5-58ac-426e-8943-eccec9261185\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.073562 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z74r\" (UniqueName: \"kubernetes.io/projected/336c7267-9a4b-4924-bad8-9ccbef37dc21-kube-api-access-9z74r\") pod \"nova-operator-controller-manager-5df598886f-5nmfh\" (UID: \"336c7267-9a4b-4924-bad8-9ccbef37dc21\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.073703 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2b8\" (UniqueName: \"kubernetes.io/projected/1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb-kube-api-access-gn2b8\") pod \"manila-operator-controller-manager-5f67fbc655-dzsh5\" (UID: \"1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.073870 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpx9t\" (UniqueName: \"kubernetes.io/projected/63700b89-455e-4df1-baec-273d82261c60-kube-api-access-tpx9t\") pod \"horizon-operator-controller-manager-7ffbcb7588-4nz57\" (UID: \"63700b89-455e-4df1-baec-273d82261c60\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 
06:43:53.073775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/d971bfad-5c66-4145-beb3-fadf231cbacf-kube-api-access-nqm7q\") pod \"mariadb-operator-controller-manager-f9fb45f8f-5fzdw\" (UID: \"d971bfad-5c66-4145-beb3-fadf231cbacf\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.099632 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.102747 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.109935 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.111453 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4db25\" (UniqueName: \"kubernetes.io/projected/bc224de1-26e7-447e-a2cd-9290d6a756c8-kube-api-access-4db25\") pod \"keystone-operator-controller-manager-55b6b7c7b8-sqftk\" (UID: \"bc224de1-26e7-447e-a2cd-9290d6a756c8\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.112218 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" Oct 13 06:43:53 crc kubenswrapper[4833]: E1013 06:43:53.112777 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.112947 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" Oct 13 06:43:53 crc kubenswrapper[4833]: E1013 06:43:53.113370 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert podName:47ff6370-fc12-4d28-a59a-1ae1614191a9 nodeName:}" failed. No retries permitted until 2025-10-13 06:43:53.613344842 +0000 UTC m=+923.713767768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert") pod "infra-operator-controller-manager-656bcbd775-6wmgz" (UID: "47ff6370-fc12-4d28-a59a-1ae1614191a9") : secret "infra-operator-webhook-server-cert" not found Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.114321 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6t7s8" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.128186 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.148652 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.150601 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ztmq8" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.150776 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.156447 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4db25\" (UniqueName: \"kubernetes.io/projected/bc224de1-26e7-447e-a2cd-9290d6a756c8-kube-api-access-4db25\") pod \"keystone-operator-controller-manager-55b6b7c7b8-sqftk\" (UID: \"bc224de1-26e7-447e-a2cd-9290d6a756c8\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.156928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2b8\" (UniqueName: \"kubernetes.io/projected/1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb-kube-api-access-gn2b8\") pod \"manila-operator-controller-manager-5f67fbc655-dzsh5\" (UID: \"1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.160866 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmdfh\" (UniqueName: \"kubernetes.io/projected/8ea24fc5-58ac-426e-8943-eccec9261185-kube-api-access-kmdfh\") pod \"ironic-operator-controller-manager-9c5c78d49-9rvcx\" (UID: \"8ea24fc5-58ac-426e-8943-eccec9261185\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.172169 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.175862 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvdw\" (UniqueName: \"kubernetes.io/projected/47ff6370-fc12-4d28-a59a-1ae1614191a9-kube-api-access-zdvdw\") pod \"infra-operator-controller-manager-656bcbd775-6wmgz\" (UID: \"47ff6370-fc12-4d28-a59a-1ae1614191a9\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.215733 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.217091 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-cert\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7bm68wk\" (UID: \"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.217118 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fbq7\" (UniqueName: \"kubernetes.io/projected/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-kube-api-access-9fbq7\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7bm68wk\" (UID: \"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.217147 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/d971bfad-5c66-4145-beb3-fadf231cbacf-kube-api-access-nqm7q\") pod \"mariadb-operator-controller-manager-f9fb45f8f-5fzdw\" (UID: \"d971bfad-5c66-4145-beb3-fadf231cbacf\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.217178 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57ph\" (UniqueName: \"kubernetes.io/projected/390d3ba7-7d67-4b01-9729-22040d2c8ecd-kube-api-access-q57ph\") pod \"neutron-operator-controller-manager-79d585cb66-542w7\" (UID: \"390d3ba7-7d67-4b01-9729-22040d2c8ecd\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.217239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z74r\" (UniqueName: \"kubernetes.io/projected/336c7267-9a4b-4924-bad8-9ccbef37dc21-kube-api-access-9z74r\") pod \"nova-operator-controller-manager-5df598886f-5nmfh\" (UID: \"336c7267-9a4b-4924-bad8-9ccbef37dc21\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.217260 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r62k9\" (UniqueName: \"kubernetes.io/projected/7e224255-b0b7-444a-aa67-8980b10e4131-kube-api-access-r62k9\") pod \"octavia-operator-controller-manager-69fdcfc5f5-h2bzs\" (UID: \"7e224255-b0b7-444a-aa67-8980b10e4131\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.221045 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.234887 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.247157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57ph\" (UniqueName: \"kubernetes.io/projected/390d3ba7-7d67-4b01-9729-22040d2c8ecd-kube-api-access-q57ph\") pod \"neutron-operator-controller-manager-79d585cb66-542w7\" (UID: \"390d3ba7-7d67-4b01-9729-22040d2c8ecd\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.247167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z74r\" (UniqueName: \"kubernetes.io/projected/336c7267-9a4b-4924-bad8-9ccbef37dc21-kube-api-access-9z74r\") pod \"nova-operator-controller-manager-5df598886f-5nmfh\" (UID: \"336c7267-9a4b-4924-bad8-9ccbef37dc21\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.247892 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/d971bfad-5c66-4145-beb3-fadf231cbacf-kube-api-access-nqm7q\") pod \"mariadb-operator-controller-manager-f9fb45f8f-5fzdw\" (UID: \"d971bfad-5c66-4145-beb3-fadf231cbacf\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.255658 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.256748 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.263507 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dg72t" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.263690 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.280398 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.284113 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.285625 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.295891 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8z75p" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.300131 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.301156 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.303134 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zxxkk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.306685 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.313232 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-npmrc"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.314682 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.320102 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.321135 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlddc\" (UniqueName: \"kubernetes.io/projected/f7288e97-60b3-4dbf-8717-883c28e960b4-kube-api-access-tlddc\") pod \"placement-operator-controller-manager-68b6c87b68-ffb5v\" (UID: \"f7288e97-60b3-4dbf-8717-883c28e960b4\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.321216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2njq\" (UniqueName: \"kubernetes.io/projected/f47ab387-784f-4cfa-998c-1c37b7b15bb8-kube-api-access-z2njq\") pod \"swift-operator-controller-manager-db6d7f97b-gnrwn\" (UID: \"f47ab387-784f-4cfa-998c-1c37b7b15bb8\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.321257 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8ccs\" (UniqueName: \"kubernetes.io/projected/96aa2f66-4ecd-476b-9bf2-a9da443767df-kube-api-access-q8ccs\") pod \"telemetry-operator-controller-manager-67cfc6749b-jh5ml\" (UID: \"96aa2f66-4ecd-476b-9bf2-a9da443767df\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.321426 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r62k9\" (UniqueName: \"kubernetes.io/projected/7e224255-b0b7-444a-aa67-8980b10e4131-kube-api-access-r62k9\") pod \"octavia-operator-controller-manager-69fdcfc5f5-h2bzs\" (UID: \"7e224255-b0b7-444a-aa67-8980b10e4131\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.321483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-cert\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7bm68wk\" (UID: \"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.321505 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbq7\" (UniqueName: \"kubernetes.io/projected/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-kube-api-access-9fbq7\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7bm68wk\" (UID: \"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.321524 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmg95\" (UniqueName: \"kubernetes.io/projected/a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972-kube-api-access-nmg95\") pod \"ovn-operator-controller-manager-79df5fb58c-hmwbl\" (UID: \"a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.321581 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrpdg\" (UniqueName: \"kubernetes.io/projected/f710c8db-0ead-4d38-9dd5-74b1068c85cc-kube-api-access-mrpdg\") pod \"test-operator-controller-manager-5458f77c4-npmrc\" (UID: \"f710c8db-0ead-4d38-9dd5-74b1068c85cc\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" Oct 13 06:43:53 crc kubenswrapper[4833]: E1013 06:43:53.322899 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 13 06:43:53 crc kubenswrapper[4833]: E1013 06:43:53.323055 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-cert podName:e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6 nodeName:}" failed. No retries permitted until 2025-10-13 06:43:53.823032907 +0000 UTC m=+923.923455913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-cert") pod "openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" (UID: "e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.325346 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-p2fbg" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.340074 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.341138 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.343711 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r62k9\" (UniqueName: \"kubernetes.io/projected/7e224255-b0b7-444a-aa67-8980b10e4131-kube-api-access-r62k9\") pod \"octavia-operator-controller-manager-69fdcfc5f5-h2bzs\" (UID: \"7e224255-b0b7-444a-aa67-8980b10e4131\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.347424 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pmvf4" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.351082 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fbq7\" (UniqueName: \"kubernetes.io/projected/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-kube-api-access-9fbq7\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7bm68wk\" (UID: \"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.362735 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.382315 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-npmrc"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.397652 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.423450 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlddc\" (UniqueName: \"kubernetes.io/projected/f7288e97-60b3-4dbf-8717-883c28e960b4-kube-api-access-tlddc\") pod \"placement-operator-controller-manager-68b6c87b68-ffb5v\" (UID: \"f7288e97-60b3-4dbf-8717-883c28e960b4\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.423703 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbnz\" (UniqueName: \"kubernetes.io/projected/5822a35e-6851-47e9-be13-3a5418c44787-kube-api-access-2pbnz\") pod \"watcher-operator-controller-manager-7f554bff7b-2kjcp\" (UID: \"5822a35e-6851-47e9-be13-3a5418c44787\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.423792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2njq\" (UniqueName: \"kubernetes.io/projected/f47ab387-784f-4cfa-998c-1c37b7b15bb8-kube-api-access-z2njq\") pod \"swift-operator-controller-manager-db6d7f97b-gnrwn\" (UID: \"f47ab387-784f-4cfa-998c-1c37b7b15bb8\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.423856 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8ccs\" (UniqueName: \"kubernetes.io/projected/96aa2f66-4ecd-476b-9bf2-a9da443767df-kube-api-access-q8ccs\") pod 
\"telemetry-operator-controller-manager-67cfc6749b-jh5ml\" (UID: \"96aa2f66-4ecd-476b-9bf2-a9da443767df\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.423975 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmg95\" (UniqueName: \"kubernetes.io/projected/a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972-kube-api-access-nmg95\") pod \"ovn-operator-controller-manager-79df5fb58c-hmwbl\" (UID: \"a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.424053 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrpdg\" (UniqueName: \"kubernetes.io/projected/f710c8db-0ead-4d38-9dd5-74b1068c85cc-kube-api-access-mrpdg\") pod \"test-operator-controller-manager-5458f77c4-npmrc\" (UID: \"f710c8db-0ead-4d38-9dd5-74b1068c85cc\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.451167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8ccs\" (UniqueName: \"kubernetes.io/projected/96aa2f66-4ecd-476b-9bf2-a9da443767df-kube-api-access-q8ccs\") pod \"telemetry-operator-controller-manager-67cfc6749b-jh5ml\" (UID: \"96aa2f66-4ecd-476b-9bf2-a9da443767df\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.452190 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.455085 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrpdg\" (UniqueName: \"kubernetes.io/projected/f710c8db-0ead-4d38-9dd5-74b1068c85cc-kube-api-access-mrpdg\") pod \"test-operator-controller-manager-5458f77c4-npmrc\" (UID: \"f710c8db-0ead-4d38-9dd5-74b1068c85cc\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.455794 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2njq\" (UniqueName: \"kubernetes.io/projected/f47ab387-784f-4cfa-998c-1c37b7b15bb8-kube-api-access-z2njq\") pod \"swift-operator-controller-manager-db6d7f97b-gnrwn\" (UID: \"f47ab387-784f-4cfa-998c-1c37b7b15bb8\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.462188 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmg95\" (UniqueName: \"kubernetes.io/projected/a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972-kube-api-access-nmg95\") pod \"ovn-operator-controller-manager-79df5fb58c-hmwbl\" (UID: \"a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.468482 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlddc\" (UniqueName: \"kubernetes.io/projected/f7288e97-60b3-4dbf-8717-883c28e960b4-kube-api-access-tlddc\") pod \"placement-operator-controller-manager-68b6c87b68-ffb5v\" (UID: \"f7288e97-60b3-4dbf-8717-883c28e960b4\") " 
pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.477911 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.480702 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.481236 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.486432 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.487720 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.503136 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.512210 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fk5vs" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.512395 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.514706 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.525066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbnz\" (UniqueName: \"kubernetes.io/projected/5822a35e-6851-47e9-be13-3a5418c44787-kube-api-access-2pbnz\") pod \"watcher-operator-controller-manager-7f554bff7b-2kjcp\" (UID: \"5822a35e-6851-47e9-be13-3a5418c44787\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.537773 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.539055 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.550951 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-khgtx" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.565748 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.578168 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbnz\" (UniqueName: \"kubernetes.io/projected/5822a35e-6851-47e9-be13-3a5418c44787-kube-api-access-2pbnz\") pod \"watcher-operator-controller-manager-7f554bff7b-2kjcp\" (UID: \"5822a35e-6851-47e9-be13-3a5418c44787\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.597005 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.628673 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert\") pod \"infra-operator-controller-manager-656bcbd775-6wmgz\" (UID: \"47ff6370-fc12-4d28-a59a-1ae1614191a9\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.628722 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zpv7\" (UniqueName: \"kubernetes.io/projected/d941a373-01b1-4305-a56a-8829605f9efa-kube-api-access-6zpv7\") pod \"openstack-operator-controller-manager-5b95c8954b-pn28d\" (UID: \"d941a373-01b1-4305-a56a-8829605f9efa\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.628772 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d941a373-01b1-4305-a56a-8829605f9efa-cert\") pod \"openstack-operator-controller-manager-5b95c8954b-pn28d\" (UID: \"d941a373-01b1-4305-a56a-8829605f9efa\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.629041 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" Oct 13 06:43:53 crc kubenswrapper[4833]: E1013 06:43:53.629524 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 13 06:43:53 crc kubenswrapper[4833]: E1013 06:43:53.629594 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert podName:47ff6370-fc12-4d28-a59a-1ae1614191a9 nodeName:}" failed. No retries permitted until 2025-10-13 06:43:54.629575824 +0000 UTC m=+924.729998740 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert") pod "infra-operator-controller-manager-656bcbd775-6wmgz" (UID: "47ff6370-fc12-4d28-a59a-1ae1614191a9") : secret "infra-operator-webhook-server-cert" not found Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.689410 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.697611 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.699034 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.716911 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.731343 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zpv7\" (UniqueName: \"kubernetes.io/projected/d941a373-01b1-4305-a56a-8829605f9efa-kube-api-access-6zpv7\") pod \"openstack-operator-controller-manager-5b95c8954b-pn28d\" (UID: \"d941a373-01b1-4305-a56a-8829605f9efa\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.731381 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzs62\" (UniqueName: \"kubernetes.io/projected/76ebee26-0a3b-49a8-92f1-4eb0362ed0c5-kube-api-access-jzs62\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k\" (UID: \"76ebee26-0a3b-49a8-92f1-4eb0362ed0c5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.731411 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d941a373-01b1-4305-a56a-8829605f9efa-cert\") pod \"openstack-operator-controller-manager-5b95c8954b-pn28d\" (UID: \"d941a373-01b1-4305-a56a-8829605f9efa\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.744013 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj"] Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.746297 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d941a373-01b1-4305-a56a-8829605f9efa-cert\") pod \"openstack-operator-controller-manager-5b95c8954b-pn28d\" (UID: \"d941a373-01b1-4305-a56a-8829605f9efa\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.754119 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zpv7\" (UniqueName: \"kubernetes.io/projected/d941a373-01b1-4305-a56a-8829605f9efa-kube-api-access-6zpv7\") pod \"openstack-operator-controller-manager-5b95c8954b-pn28d\" (UID: \"d941a373-01b1-4305-a56a-8829605f9efa\") " 
pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.775427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" event={"ID":"bbfa1bde-53ad-46fb-9217-cfd3bcbd9355","Type":"ContainerStarted","Data":"f03f250a72caf27cbd88d63d854c98d0bdd6906e9a0e3a814855eb3dc5f5ead7"} Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.836103 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzs62\" (UniqueName: \"kubernetes.io/projected/76ebee26-0a3b-49a8-92f1-4eb0362ed0c5-kube-api-access-jzs62\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k\" (UID: \"76ebee26-0a3b-49a8-92f1-4eb0362ed0c5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.836193 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-cert\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7bm68wk\" (UID: \"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.857223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6-cert\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7bm68wk\" (UID: \"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.861051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzs62\" (UniqueName: \"kubernetes.io/projected/76ebee26-0a3b-49a8-92f1-4eb0362ed0c5-kube-api-access-jzs62\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k\" (UID: \"76ebee26-0a3b-49a8-92f1-4eb0362ed0c5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.891083 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.908264 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" Oct 13 06:43:53 crc kubenswrapper[4833]: I1013 06:43:53.912756 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6"] Oct 13 06:43:53 crc kubenswrapper[4833]: W1013 06:43:53.936059 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da4ee47_a023_411b_b367_c7eae5c8bd9b.slice/crio-c7132f927194109fa27eb0ed76848b42d864ed598518bdafe3682d9b8d76f704 WatchSource:0}: Error finding container c7132f927194109fa27eb0ed76848b42d864ed598518bdafe3682d9b8d76f704: Status 404 returned error can't find the container with id c7132f927194109fa27eb0ed76848b42d864ed598518bdafe3682d9b8d76f704 Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.005824 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57"] Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.113164 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq"] Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.121052 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk"] Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.127241 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.141265 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p"] Oct 13 06:43:54 crc kubenswrapper[4833]: W1013 06:43:54.145515 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc224de1_26e7_447e_a2cd_9290d6a756c8.slice/crio-786a928e75ee94463f8aff643775e230180348b14c566a02d6534848faeedac2 WatchSource:0}: Error finding container 786a928e75ee94463f8aff643775e230180348b14c566a02d6534848faeedac2: Status 404 returned error can't find the container with id 786a928e75ee94463f8aff643775e230180348b14c566a02d6534848faeedac2 Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.269212 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh"] Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.281656 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5"] Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.288913 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw"] Oct 13 06:43:54 crc kubenswrapper[4833]: W1013 06:43:54.288913 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b4f79cc_b86c_4742_97cd_d5c2a7fc95fb.slice/crio-fc20716f0e997420624b1b343dcd243875988c050b53116932bba75317cd004f WatchSource:0}: Error finding container fc20716f0e997420624b1b343dcd243875988c050b53116932bba75317cd004f: Status 404 returned error can't find the container with id fc20716f0e997420624b1b343dcd243875988c050b53116932bba75317cd004f Oct 13 06:43:54 crc 
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.301939 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7"]
Oct 13 06:43:54 crc kubenswrapper[4833]: W1013 06:43:54.304283 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd971bfad_5c66_4145_beb3_fadf231cbacf.slice/crio-34c42266aee56d80fca4c534f2fb88097ddad7cb426b9e9b6c00b4cbabe786d1 WatchSource:0}: Error finding container 34c42266aee56d80fca4c534f2fb88097ddad7cb426b9e9b6c00b4cbabe786d1: Status 404 returned error can't find the container with id 34c42266aee56d80fca4c534f2fb88097ddad7cb426b9e9b6c00b4cbabe786d1
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.466193 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v"]
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.471479 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml"]
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.479423 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs"]
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.481861 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl"]
Oct 13 06:43:54 crc kubenswrapper[4833]: W1013 06:43:54.483223 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e224255_b0b7_444a_aa67_8980b10e4131.slice/crio-d6f52ac0bc411bab4725995d56b632c16635e120e6b1ca5a7bbaccce5b503e3f WatchSource:0}: Error finding container d6f52ac0bc411bab4725995d56b632c16635e120e6b1ca5a7bbaccce5b503e3f: Status 404 returned error can't find the container with id d6f52ac0bc411bab4725995d56b632c16635e120e6b1ca5a7bbaccce5b503e3f
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.489275 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx"]
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.502171 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmg95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-79df5fb58c-hmwbl_openstack-operators(a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 13 06:43:54 crc kubenswrapper[4833]: W1013 06:43:54.510516 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ea24fc5_58ac_426e_8943_eccec9261185.slice/crio-e950d5d60311212a24e2ceed256ba5ded1aa56939cc087bace1503bb98f8fdfd WatchSource:0}: Error finding container e950d5d60311212a24e2ceed256ba5ded1aa56939cc087bace1503bb98f8fdfd: Status 404 returned error can't find the container with id e950d5d60311212a24e2ceed256ba5ded1aa56939cc087bace1503bb98f8fdfd
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.513938 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q8ccs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-67cfc6749b-jh5ml_openstack-operators(96aa2f66-4ecd-476b-9bf2-a9da443767df): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.520086 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kmdfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-9c5c78d49-9rvcx_openstack-operators(8ea24fc5-58ac-426e-8943-eccec9261185): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.652301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert\") pod \"infra-operator-controller-manager-656bcbd775-6wmgz\" (UID: \"47ff6370-fc12-4d28-a59a-1ae1614191a9\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz"
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.659455 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47ff6370-fc12-4d28-a59a-1ae1614191a9-cert\") pod \"infra-operator-controller-manager-656bcbd775-6wmgz\" (UID: \"47ff6370-fc12-4d28-a59a-1ae1614191a9\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz"
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.673444 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" podUID="a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972"
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.676216 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" podUID="8ea24fc5-58ac-426e-8943-eccec9261185"
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.699169 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" podUID="96aa2f66-4ecd-476b-9bf2-a9da443767df"
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.714799 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn"]
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.727403 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-npmrc"]
Oct 13 06:43:54 crc kubenswrapper[4833]: W1013 06:43:54.738191 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf47ab387_784f_4cfa_998c_1c37b7b15bb8.slice/crio-aa9b9b95d9623057c587a1450516e87da9f447b6e1c8067f664471d6f4564661 WatchSource:0}: Error finding container aa9b9b95d9623057c587a1450516e87da9f447b6e1c8067f664471d6f4564661: Status 404 returned error can't find the container with id aa9b9b95d9623057c587a1450516e87da9f447b6e1c8067f664471d6f4564661
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.760010 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp"]
Oct 13 06:43:54 crc kubenswrapper[4833]: W1013 06:43:54.773444 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5822a35e_6851_47e9_be13_3a5418c44787.slice/crio-49bf15f3b78e6f24cd82da4a0b5a4e41e293105b873d03f172e0e205b4198029 WatchSource:0}: Error finding container 49bf15f3b78e6f24cd82da4a0b5a4e41e293105b873d03f172e0e205b4198029: Status 404 returned error can't find the container with id 49bf15f3b78e6f24cd82da4a0b5a4e41e293105b873d03f172e0e205b4198029
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.778482 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2pbnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7f554bff7b-2kjcp_openstack-operators(5822a35e-6851-47e9-be13-3a5418c44787): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.780959 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzs62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k_openstack-operators(76ebee26-0a3b-49a8-92f1-4eb0362ed0c5): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.783804 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" podUID="76ebee26-0a3b-49a8-92f1-4eb0362ed0c5"
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.790994 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" event={"ID":"f47ab387-784f-4cfa-998c-1c37b7b15bb8","Type":"ContainerStarted","Data":"aa9b9b95d9623057c587a1450516e87da9f447b6e1c8067f664471d6f4564661"}
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.793002 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" event={"ID":"f7288e97-60b3-4dbf-8717-883c28e960b4","Type":"ContainerStarted","Data":"51c095da6273db05baa4adb46e9bc364925c210c203e63a02cbf5b0f3e6c27e8"}
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.793270 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k"]
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.796710 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" event={"ID":"7e224255-b0b7-444a-aa67-8980b10e4131","Type":"ContainerStarted","Data":"d6f52ac0bc411bab4725995d56b632c16635e120e6b1ca5a7bbaccce5b503e3f"}
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.798273 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" event={"ID":"76ebee26-0a3b-49a8-92f1-4eb0362ed0c5","Type":"ContainerStarted","Data":"d3f319ce68b1b77516d4e104fa63a5d3de08cefa2b051f806012ab71bee5f929"}
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.813370 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" podUID="76ebee26-0a3b-49a8-92f1-4eb0362ed0c5"
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.813448 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" event={"ID":"63700b89-455e-4df1-baec-273d82261c60","Type":"ContainerStarted","Data":"cfef1e68ad2cffc1b9ee6efb1c1a27deb5c6539d8e444f7023859d761b364074"}
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.817986 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d"]
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.819278 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" event={"ID":"f710c8db-0ead-4d38-9dd5-74b1068c85cc","Type":"ContainerStarted","Data":"10f18b27b428125504c3bf3d6d08f9e2f4cd0aab9094336e95f905f3d918de0e"}
Oct 13 06:43:54 crc kubenswrapper[4833]: W1013 06:43:54.821813 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode21ec2f6_af6b_4fa8_98c7_937dbf8f44b6.slice/crio-3cc27e3c98291eb951244e19505b5ebcd40c192dbee5d84fe3b181e05feae2ca WatchSource:0}: Error finding container 3cc27e3c98291eb951244e19505b5ebcd40c192dbee5d84fe3b181e05feae2ca: Status 404 returned error can't find the container with id 3cc27e3c98291eb951244e19505b5ebcd40c192dbee5d84fe3b181e05feae2ca
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.821817 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" event={"ID":"1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb","Type":"ContainerStarted","Data":"fc20716f0e997420624b1b343dcd243875988c050b53116932bba75317cd004f"}
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.823787 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk"]
Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.823818 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" event={"ID":"5822a35e-6851-47e9-be13-3a5418c44787","Type":"ContainerStarted","Data":"49bf15f3b78e6f24cd82da4a0b5a4e41e293105b873d03f172e0e205b4198029"}
Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.826048 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:03b4f3db4b373515f7e4095984b97197c05a14f87b2a0a525eb5d7be1d7bda66,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:6722a752fb7cbffbae811f6ad6567120fbd4ebbe8c38a83ec2df02850a3276bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:2115452234aedb505ed4efc6cd9b9a4ce3b9809aa7d0128d8fbeeee84dad1a69,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:50597a8eaa6c4383f357574dcab8358b698729797b4156d932985a08ab86b7cd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:cb4997d62c7b2534233a676cb92e19cf85dda07e2fb9fa642c28aab30489f69a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:1ccbf3f6cf24c9ee91bed71467491e22b8cb4b95bce90250f4174fae936b0fa1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:e91d58021b54c46883595ff66be65882de54abdb3be2ca53c4162b20d18b5f48,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:cbe345acb37e57986ecf6685d28c72d0e639bdb493a18e9d3ba947d6c3a16384,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:e7dcc3bf23d5e0393ac173e3c43d4ae85f4613a4fd16b3c147dc32ae491d49bf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:2a1a8b582c6e4cc31081bd8b0887acf45e31c1d14596c4e361d27d08fef0debf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:86daeb9c834bfcedb533086dff59a6b5b6e832b94ce2a9116337f8736bb80032,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:5d4fdf424fad33a3650163e9e7423f92e97de3305508c2b7c6435822e0313189,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:6d28de018f6e1672e775a75735e3bc16b63da41acd8fb5196ee0b06856c07133,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/
podified-antelope-centos9/openstack-ceilometer-notification@sha256:c5fc9b72fc593bcf3b569c7ed24a256448eb1afab1504e668a3822e978be1306,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:88b99249f15470f359fb554f7f3a56974b743f4655e3f0c982c0260f75a67697,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:e861d66785047d39eb68d9bac23e3f57ac84d9bd95593502d9b3b913b99fd1a4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:b95f09bf3d259f9eacf3b63931977483f5c3c332f49b95ee8a69d8e3fb71d082,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:6fc7801c0d18d41b9f11484b1cdb342de9cebd93072ec2205dbe40945715184f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:d4d824b80cbed683543d9e8c7045ac97e080774f45a5067ccbca26404e067821,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:182ec75938d8d3fb7d8f916373368add24062fec90489aa57776a81d0b36ea20,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:9507ba5ab74cbae902e2dc07f89c7b3b5b76d8079e444365fe0eee6000fd7aaa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:17db080dcc4099f8a20aa0f238b6bca5c104672ae46743adeab9d1637725ecaa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:fd55cf3d73bfdc518419c9ba0b0cbef275140ae2d3bd0342a7310f81d57c2d78,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:d164a9bd383f50df69fc22e7422f4650cd5076c90ed19278fc0f04e54345a63d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:6beffe7d0bd75f9d1f495aeb7ab2334a2414af2c581d4833363df8441ed01018,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:2308c7b6c3d0aabbadfc9a06d84d67d2243f27fe8eed740ee96b1ce910203f62,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Valu
e:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:9cf0ca292340f1f978603955ef682effbf24316d6e2376b1c89906d84c3f06d0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:58f678016d7f6c8fe579abe886fd138ef853642faa6766ca60639feac12d82ac,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:46f92909153aaf03a585374b77d103c536509747e3270558d9a533295c46a7c5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:7fe367f51638c5c302fd3f8e66a31b09cb3b11519a7f72ef142b6c6fe8b91694,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:9ebf424d4107275a2e3f21f7a18ef257ff2f97c1298109ac7c802a5a4f4794f2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:4fcbe0d9a3c845708ecc32102ad4abbcbd947d87e5cf91f186de75b5d84ec681,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:58a4e9a4dea86635c93ce37a2bb3c60ece62b3d656f6ee6a8845347cbb3e90fd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:6f2b843bc9f4ceb1ee873972d69e6bae6e1dbd378b486995bc3697d8bcff6339,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:03b4bb79b71d5ca7792d19c4c0ee08a5e5a407ad844c087305c42dd909ee7490,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:773daada6402d9cad089cdc809d6c0335456d057ac1a25441ab5d82add2f70f4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:7323406a63fb3fdbb3eea4da0f7e8ed89c94c9bd0ad5ecd6c18fa4a4c2c550c4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:7ae82068011e2d2e5ddc88c943fd32ff4a11902793e7a1df729811b2e27122a0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:0c762c15d9d98d39cc9dc3d1f9a70f9188fef58d4e2f3b0c69c896cab8da5e48,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE
_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:febf65561eeef5b36b70d0d65ee83f6451e43ec97bfab4d826e14215da6ff19b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:b8aadfc3d547c5ef1e27fcb573d4760cf8c2f2271eefe1793c35a0d46b640837,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:ecc91fd5079ee6d0c6ae1b11e97da790e33864d0e1930e574f959da2bddfa59a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:2e981e93f99c929a3f04e5e41c8f645d44d390a9aeee3c5193cce7ec2edcbf3a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:1e5714637b6e1a24c2858fe6d9bbb3f00bc61d69ad74a657b1c23682bf4cb2b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:35b8dcf27dc3b67f3840fa0e693ff312f74f7e22c634dff206a5c4d0133c716c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:e109e4863e05e803dbfe04917756fd52231c560c65353170a2000be6cc2bb53d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:6df0bebd9318ce11624413249e7e9781311638f276f8877668d3b382fe90e62f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:56b75d97f4a48c8cf58b3a7c18c43618efb308bf0188124f6301142e61299b0c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:a51ed62767206067aa501142dbf01f20b3d65325d30faf1b4d6424d5b17dfba5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:592e3cd32d3cc97a69093ad905b449aa374ffbb1b2644b738bb6c1434476d1f6,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:9596452e283febbe08204d0ef0fd1992af3395d0969f7ac76663ed7c8be5b4d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:d61005a10bef1b37762a8a41e6755c1169241e36cc5f92886bca6f4f6b9c381a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:e6a4335bcbeed3cd3e73ac879f754e314761e4a417a67539ca88e96a79346328,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/
podified-antelope-centos9/openstack-octavia-api@sha256:97d88fc53421b699fc91983313d7beec4a0f177089e95bdf5ba15c3f521db9a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:5365e5c9c3ad2ede1b6945255b2cc6b009d642c39babdf25e0655282cfa646fe,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:5b55795d774e0ea160ff8a7fd491ed41cf2d93c7d821694abb3a879eaffcefeb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:26e955c46a6063eafcfeb79430bf3d9268dbe95687c00e63a624b3ec5a846f5a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:58939baa18ab09e2b24996c5f3665ae52274b781f661ea06a67c991e9a832d5a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:b8bff6857fec93c3c1521f1a8c23de21bcb86fc0f960972e81f6c3f95d4185be,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:943eee724277e252795909137538a553ef5284c8103ad01b9be7b0138c66d14d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:d97b08fd421065c8c33a523973822ac468500cbe853069aa9214393fbda7a908,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:289dea3beea1cd4405895fc42e44372b35e4a941e31c59e102c333471a3ca9b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:9b19894fa67a81bf8ba4159b55b49f38877c670aeb97e2021c341cef2a9294e4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:ea164961ad30453ad0301c6b73364e1f1024f689634c88dd98265f9c7048e31d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:6f9f2ea45f0271f6da8eb05a5f74cf5ce6769479346f5c2f407ee6f31a9c7ff3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:59448516174fc3bab679b9a8dd62cb9a9d16b5734aadbeb98e960e3b7c79bd22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:adcdeb8ecd601fb03c3b0901d5b5111af2ca48f7dd443e22224db6daaf08f5d0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:2bf32d9b95899d7637dfe19d07cf1ecc9a06593984faff57a3c0dce060012edb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE
_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:7a452cd18b64d522e8a1e25bdcea543e9fe5f5b76e1c5e044c2b5334e06a326b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:6a46aa13aa359b8e782a22d67db42db02bbf2bb7e35df4b684ac1daeda38cde3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:f6824854bea6b2acbb00c34639799b4744818d4adbdd40e37dc5088f9ae18d58,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a66d2fdc21f25c690f02e643d2666dbe7df43a64cd55086ec33d6755e6d809b9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:30701a65382430570f6fb35621f64f1003f727b6da745ce84fb1a90436ee2350,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:b9a657c51bbcc236e6c906a6df6c42cd2a28bab69e7ab58b0e9ced12295b2d87,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:fd65fb5c9710c46aa1c31e65a51cd5c23ec35cf68c2452d421f919f2aa9b6255,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fbq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5956dffb7bm68wk_openstack-operators(e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.827571 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" event={"ID":"96aa2f66-4ecd-476b-9bf2-a9da443767df","Type":"ContainerStarted","Data":"4e6b89acd07923145ceb1a5fb11d512992a07de1ec759c804099a65ed62542b9"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.827641 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" event={"ID":"96aa2f66-4ecd-476b-9bf2-a9da443767df","Type":"ContainerStarted","Data":"1c8cfbf9c7df44d31c374d486056b437b872e93235a17b33604aebdb34dacc5d"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.829454 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" event={"ID":"a24ab3f0-f53c-4f16-9fd8-e0e69149776d","Type":"ContainerStarted","Data":"161650e3ba68470bfac3d14247363e47bb3cc8aeb1a0fd8247e6ce5624c30b9e"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.831698 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" event={"ID":"390d3ba7-7d67-4b01-9729-22040d2c8ecd","Type":"ContainerStarted","Data":"d2f573a9a9050336fcec9ab9de8902cb984440d7b5e7b49e6966871b1967aac7"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.833203 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" event={"ID":"677b9ffe-683c-4a84-9b7c-a625280c79f8","Type":"ContainerStarted","Data":"f4bf371bfc37ac7021ccd2725d951d1c60c318982149c4ad8c53eb633196bcda"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.834890 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" event={"ID":"bc224de1-26e7-447e-a2cd-9290d6a756c8","Type":"ContainerStarted","Data":"786a928e75ee94463f8aff643775e230180348b14c566a02d6534848faeedac2"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.836276 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" event={"ID":"336c7267-9a4b-4924-bad8-9ccbef37dc21","Type":"ContainerStarted","Data":"ff4fc35ee92b9de2a605cae8a375847fcfd6bb21a93b9db5d7d5f9f9f6c62030"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.839018 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" event={"ID":"3da4ee47-a023-411b-b367-c7eae5c8bd9b","Type":"ContainerStarted","Data":"c7132f927194109fa27eb0ed76848b42d864ed598518bdafe3682d9b8d76f704"} Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.850708 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" podUID="96aa2f66-4ecd-476b-9bf2-a9da443767df" Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.851003 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" event={"ID":"9608166e-0d48-4e57-99d7-6fa85036e7bf","Type":"ContainerStarted","Data":"11042cd1d5e7f403869d6ab25a41ec266dd97607485c5b2431f1b67310ff5de6"} Oct 
13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.853167 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" event={"ID":"a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972","Type":"ContainerStarted","Data":"563f1d7a4e0774a66054960adadab42f45b88c4372872cd95016cf6bb1777218"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.853196 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" event={"ID":"a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972","Type":"ContainerStarted","Data":"a89baeea151e770a584b8ed15fb613162705f89bcccc59443d0813551fda4f48"} Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.855012 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" podUID="a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972" Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.857485 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" event={"ID":"d971bfad-5c66-4145-beb3-fadf231cbacf","Type":"ContainerStarted","Data":"34c42266aee56d80fca4c534f2fb88097ddad7cb426b9e9b6c00b4cbabe786d1"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.860683 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" event={"ID":"8ea24fc5-58ac-426e-8943-eccec9261185","Type":"ContainerStarted","Data":"31a0a7d3ee7dca3aa91c16cf09540ea40b10c65a689eaa7135a0d438da5e76fa"} Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.860729 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" event={"ID":"8ea24fc5-58ac-426e-8943-eccec9261185","Type":"ContainerStarted","Data":"e950d5d60311212a24e2ceed256ba5ded1aa56939cc087bace1503bb98f8fdfd"} Oct 13 06:43:54 crc kubenswrapper[4833]: E1013 06:43:54.893402 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" podUID="8ea24fc5-58ac-426e-8943-eccec9261185" Oct 13 06:43:54 crc kubenswrapper[4833]: I1013 06:43:54.923061 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:43:55 crc kubenswrapper[4833]: E1013 06:43:55.108457 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" podUID="5822a35e-6851-47e9-be13-3a5418c44787" Oct 13 06:43:55 crc kubenswrapper[4833]: E1013 06:43:55.125950 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" podUID="e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6" Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.437686 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz"] Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.872907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" event={"ID":"d941a373-01b1-4305-a56a-8829605f9efa","Type":"ContainerStarted","Data":"b6fb86cf7d9d4d3db70aa86c0780e5d2e9fc2a02cfb92db95139e669587cafbe"} Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.872979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" event={"ID":"d941a373-01b1-4305-a56a-8829605f9efa","Type":"ContainerStarted","Data":"cb4150c9db538747d5c9c954a78a7fd6aad8a691532a1e56d243517b9e43dd63"} Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.872996 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" event={"ID":"d941a373-01b1-4305-a56a-8829605f9efa","Type":"ContainerStarted","Data":"5de9c4ea5fa10419d1257ba76fe0aa94e5a894c329d462a824a6dd4473094918"} Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.873065 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.903023 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" event={"ID":"5822a35e-6851-47e9-be13-3a5418c44787","Type":"ContainerStarted","Data":"939453da1474cc879dec0166efa82b9a1d144dbfad3277156c75d487d57718c6"} Oct 13 06:43:55 crc kubenswrapper[4833]: E1013 06:43:55.905077 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" podUID="5822a35e-6851-47e9-be13-3a5418c44787" Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.905386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" event={"ID":"47ff6370-fc12-4d28-a59a-1ae1614191a9","Type":"ContainerStarted","Data":"8d9970fc372080bc885af74d8ee5ea6721eab0d2fd2dd164b380f4f9da93ef8d"} Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.908067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" event={"ID":"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6","Type":"ContainerStarted","Data":"7e8c1a9b5bd67b9f4a45370b47ffd9ce0277064917f1aa7e9208e692a97382ca"} Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.908093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" event={"ID":"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6","Type":"ContainerStarted","Data":"3cc27e3c98291eb951244e19505b5ebcd40c192dbee5d84fe3b181e05feae2ca"} Oct 13 06:43:55 crc kubenswrapper[4833]: E1013 06:43:55.909221 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" podUID="8ea24fc5-58ac-426e-8943-eccec9261185" Oct 13 06:43:55 crc kubenswrapper[4833]: E1013 06:43:55.909385 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" podUID="a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972" Oct 13 06:43:55 crc kubenswrapper[4833]: E1013 06:43:55.915042 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" podUID="e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6" Oct 13 06:43:55 crc kubenswrapper[4833]: E1013 06:43:55.915143 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" podUID="76ebee26-0a3b-49a8-92f1-4eb0362ed0c5" Oct 13 06:43:55 crc kubenswrapper[4833]: E1013 06:43:55.915220 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" podUID="96aa2f66-4ecd-476b-9bf2-a9da443767df" Oct 13 06:43:55 crc kubenswrapper[4833]: I1013 06:43:55.936128 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" podStartSLOduration=2.936111575 podStartE2EDuration="2.936111575s" podCreationTimestamp="2025-10-13 06:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:43:55.924527879 +0000 UTC m=+926.024950785" watchObservedRunningTime="2025-10-13 
06:43:55.936111575 +0000 UTC m=+926.036534491" Oct 13 06:43:56 crc kubenswrapper[4833]: E1013 06:43:56.916318 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" podUID="5822a35e-6851-47e9-be13-3a5418c44787" Oct 13 06:43:56 crc kubenswrapper[4833]: E1013 06:43:56.916380 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" podUID="e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6" Oct 13 06:44:00 crc kubenswrapper[4833]: I1013 06:44:00.542767 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:44:00 crc kubenswrapper[4833]: I1013 06:44:00.543139 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:44:03 crc kubenswrapper[4833]: I1013 06:44:03.908866 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-pn28d" Oct 13 06:44:03 crc kubenswrapper[4833]: I1013 06:44:03.993622 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" event={"ID":"a24ab3f0-f53c-4f16-9fd8-e0e69149776d","Type":"ContainerStarted","Data":"5c70e045bb361774ac75381131e9f158595870a668702b1e67e7a4c1ff7a814b"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.011812 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" event={"ID":"677b9ffe-683c-4a84-9b7c-a625280c79f8","Type":"ContainerStarted","Data":"bd19ec210c1dbbc02df31ce9f868d1c912c15342b34c5cb2fe5b95aab8214e84"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.018474 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" event={"ID":"f47ab387-784f-4cfa-998c-1c37b7b15bb8","Type":"ContainerStarted","Data":"dd04803bb9e12013f3ef1dd7e508fff4798ef7e5967bf572a56a3760b51ad602"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.072023 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" event={"ID":"1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb","Type":"ContainerStarted","Data":"1c0df22089b2c1c9f4a661f8b8fb228b91c9733b0a1d15d5b6756e67fd61511d"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.081918 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" event={"ID":"bbfa1bde-53ad-46fb-9217-cfd3bcbd9355","Type":"ContainerStarted","Data":"9007af934da77ca486c154c85c3fd36540f2444b3de1edf46d9734c651279d1e"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.081963 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" event={"ID":"bbfa1bde-53ad-46fb-9217-cfd3bcbd9355","Type":"ContainerStarted","Data":"160b9037c73356daab9187b4ec7e7e9174332e105dd41c6da36dcc8b52f9f9d5"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.082906 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.095804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" event={"ID":"9608166e-0d48-4e57-99d7-6fa85036e7bf","Type":"ContainerStarted","Data":"125ae251a94fbcedfad503f60f37dd1e9a463673fcd8e3f85e106e4c42025c56"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.114254 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" podStartSLOduration=2.622817511 podStartE2EDuration="12.114236436s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:53.743029077 +0000 UTC m=+923.843451993" lastFinishedPulling="2025-10-13 06:44:03.234447992 +0000 UTC m=+933.334870918" observedRunningTime="2025-10-13 06:44:04.111649831 +0000 UTC m=+934.212072747" watchObservedRunningTime="2025-10-13 06:44:04.114236436 +0000 UTC m=+934.214659352" Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.128081 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" event={"ID":"390d3ba7-7d67-4b01-9729-22040d2c8ecd","Type":"ContainerStarted","Data":"4eff661c61860b64dca06e4906869f2b36c1bf6295df0dd20a1705bc7c08319a"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.144600 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" event={"ID":"bc224de1-26e7-447e-a2cd-9290d6a756c8","Type":"ContainerStarted","Data":"85ab6265d9c9c15b0f0972e2bd5f3e3d1916afcdb5dac2d630da81d0be870e89"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.153106 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" event={"ID":"336c7267-9a4b-4924-bad8-9ccbef37dc21","Type":"ContainerStarted","Data":"8768fb81932e956d0e235c3279acd8b072ee2da192a376d68527ba28fc02d27b"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.154177 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" event={"ID":"f710c8db-0ead-4d38-9dd5-74b1068c85cc","Type":"ContainerStarted","Data":"c7e95ee6dfddb763d884aaf648a9535f94704a7b5a5ec43e5a85bff45c2fa532"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.155294 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" event={"ID":"f7288e97-60b3-4dbf-8717-883c28e960b4","Type":"ContainerStarted","Data":"e1430c6b25900c2119eaa01785b9f01356e1e88e58f99acfe3130b5d6e025b3a"} Oct 13 
06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.159041 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" event={"ID":"7e224255-b0b7-444a-aa67-8980b10e4131","Type":"ContainerStarted","Data":"8a25144d411afcc180dc2c4bed7597226cf045c76b11ab7c7c7deb1445dc8691"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.178812 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" event={"ID":"47ff6370-fc12-4d28-a59a-1ae1614191a9","Type":"ContainerStarted","Data":"31c379378a5593b78911f9618dd27bc1a6bb805a58668da79815ba6fb200b70e"} Oct 13 06:44:04 crc kubenswrapper[4833]: I1013 06:44:04.190696 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" event={"ID":"3da4ee47-a023-411b-b367-c7eae5c8bd9b","Type":"ContainerStarted","Data":"24251caec4ca08b0e836407bf6ff8d734152339f2184fd6624302581e0cf4df7"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.199389 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" event={"ID":"f7288e97-60b3-4dbf-8717-883c28e960b4","Type":"ContainerStarted","Data":"fea7f04fb44dcc31f1279f25b46d5aa932d47fead0fcb793e657d13a59ef5854"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.199992 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.202641 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" event={"ID":"677b9ffe-683c-4a84-9b7c-a625280c79f8","Type":"ContainerStarted","Data":"9f81e2ac6237fb5a6f61c9b37d7709782f301ae12dfa64a7fa58ca8990f0c7af"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.202779 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.204812 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" event={"ID":"47ff6370-fc12-4d28-a59a-1ae1614191a9","Type":"ContainerStarted","Data":"678e0ce7fb969dbb697c727627ae53539897776826126dcbdc73a55abfaaba83"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.205300 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.206983 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" event={"ID":"bc224de1-26e7-447e-a2cd-9290d6a756c8","Type":"ContainerStarted","Data":"d0285e436ad7ed4fd0b82f056369b9141f1c8bc4b706a66689c3db6793e051a5"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.207336 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.209128 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" 
event={"ID":"63700b89-455e-4df1-baec-273d82261c60","Type":"ContainerStarted","Data":"55040f6b55a851f99bd16530fe30a83dbdf027a140b49c9178ee9977199a4d3f"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.209157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" event={"ID":"63700b89-455e-4df1-baec-273d82261c60","Type":"ContainerStarted","Data":"c0742fa2d53870b0d07924c2cd3ae8c54e456f12c689dbd93bbdc2859e72fa0a"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.209569 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.211194 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" event={"ID":"f710c8db-0ead-4d38-9dd5-74b1068c85cc","Type":"ContainerStarted","Data":"a36426a3c2856fea235a0d86d098146018e3be33586b5cfeb8febf1f686cb9c7"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.211733 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.213202 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" event={"ID":"d971bfad-5c66-4145-beb3-fadf231cbacf","Type":"ContainerStarted","Data":"d47b420ee22c59c65fbda8ab6541f2ec858b1a7e4074847047a998b6e8d240b1"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.213226 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" event={"ID":"d971bfad-5c66-4145-beb3-fadf231cbacf","Type":"ContainerStarted","Data":"aa94f9a5e5543fc270fa350711e495b3c27ddbb0f76f674e397387e7db11075d"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.213328 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.214652 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" event={"ID":"390d3ba7-7d67-4b01-9729-22040d2c8ecd","Type":"ContainerStarted","Data":"b8e74f21e66842d54877d320c8f0a63a3384ce8fea98cd24481ddc1e4f15c2e5"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.214728 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.216012 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" event={"ID":"f47ab387-784f-4cfa-998c-1c37b7b15bb8","Type":"ContainerStarted","Data":"7a3c153f88119c94d2d48a8c9c4fa6b194cf8215eca07673a31a0aa7476b3d2e"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.216463 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.217787 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" 
event={"ID":"3da4ee47-a023-411b-b367-c7eae5c8bd9b","Type":"ContainerStarted","Data":"518e495cb75694f28fc1b607bddd9d0a7489015d2ab336eb28d8f9130e760190"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.218246 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.220215 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" event={"ID":"7e224255-b0b7-444a-aa67-8980b10e4131","Type":"ContainerStarted","Data":"e85e587590fdc7d6ab625fad92ad0a9076ad7f00ed3869e98b5eb168e417af64"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.220354 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.222387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" event={"ID":"9608166e-0d48-4e57-99d7-6fa85036e7bf","Type":"ContainerStarted","Data":"f91d910b8e7c7221ced1d579ec76d6060d66b68a302bd70613f487ecba7266e2"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.222542 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.225246 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" event={"ID":"a24ab3f0-f53c-4f16-9fd8-e0e69149776d","Type":"ContainerStarted","Data":"0b29e6573626f342d7114df569b509b7d62d6980955d9e62689ee0fd77402758"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.225389 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.228053 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" event={"ID":"336c7267-9a4b-4924-bad8-9ccbef37dc21","Type":"ContainerStarted","Data":"7508a3ae70f8b6cbe1cc9e36bdfc618986ebc6cdd4aa233c6fb8175e85d205b8"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.228176 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.233224 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" podStartSLOduration=3.4042507730000002 podStartE2EDuration="12.233204151s" podCreationTimestamp="2025-10-13 06:43:53 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.50012087 +0000 UTC m=+924.600543786" lastFinishedPulling="2025-10-13 06:44:03.329074248 +0000 UTC m=+933.429497164" observedRunningTime="2025-10-13 06:44:05.22763341 +0000 UTC m=+935.328056326" watchObservedRunningTime="2025-10-13 06:44:05.233204151 +0000 UTC m=+935.333627067" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.234827 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" 
event={"ID":"1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb","Type":"ContainerStarted","Data":"c184f7d6bd37bcaa98ce263b872c842b1844ff01ef61130e00268927c42b9c62"} Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.234868 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.269863 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" podStartSLOduration=4.249616625 podStartE2EDuration="13.269843805s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.310467395 +0000 UTC m=+924.410890321" lastFinishedPulling="2025-10-13 06:44:03.330694585 +0000 UTC m=+933.431117501" observedRunningTime="2025-10-13 06:44:05.269039181 +0000 UTC m=+935.369462117" watchObservedRunningTime="2025-10-13 06:44:05.269843805 +0000 UTC m=+935.370266721" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.270237 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" podStartSLOduration=4.183816535 podStartE2EDuration="13.270232246s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.195397156 +0000 UTC m=+924.295820072" lastFinishedPulling="2025-10-13 06:44:03.281812867 +0000 UTC m=+933.382235783" observedRunningTime="2025-10-13 06:44:05.254256032 +0000 UTC m=+935.354678948" watchObservedRunningTime="2025-10-13 06:44:05.270232246 +0000 UTC m=+935.370655162" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.337708 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" podStartSLOduration=3.7749503410000003 podStartE2EDuration="12.337686264s" podCreationTimestamp="2025-10-13 06:43:53 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.766672805 +0000 UTC m=+924.867095721" lastFinishedPulling="2025-10-13 06:44:03.329408728 +0000 UTC m=+933.429831644" observedRunningTime="2025-10-13 06:44:05.296866469 +0000 UTC m=+935.397289385" watchObservedRunningTime="2025-10-13 06:44:05.337686264 +0000 UTC m=+935.438109180" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.343078 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" podStartSLOduration=4.327684691 podStartE2EDuration="13.34305695s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.313245826 +0000 UTC m=+924.413668742" lastFinishedPulling="2025-10-13 06:44:03.328618085 +0000 UTC m=+933.429041001" observedRunningTime="2025-10-13 06:44:05.335780229 +0000 UTC m=+935.436203145" watchObservedRunningTime="2025-10-13 06:44:05.34305695 +0000 UTC m=+935.443479866" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.362090 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" podStartSLOduration=4.147919083 podStartE2EDuration="13.362070291s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.114980342 +0000 UTC m=+924.215403258" lastFinishedPulling="2025-10-13 06:44:03.32913155 +0000 UTC m=+933.429554466" observedRunningTime="2025-10-13 06:44:05.359643091 +0000 UTC 
m=+935.460066007" watchObservedRunningTime="2025-10-13 06:44:05.362070291 +0000 UTC m=+935.462493207" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.376089 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" podStartSLOduration=4.54366818 podStartE2EDuration="13.376074448s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.495710042 +0000 UTC m=+924.596132958" lastFinishedPulling="2025-10-13 06:44:03.32811631 +0000 UTC m=+933.428539226" observedRunningTime="2025-10-13 06:44:05.369774845 +0000 UTC m=+935.470197771" watchObservedRunningTime="2025-10-13 06:44:05.376074448 +0000 UTC m=+935.476497364" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.396579 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" podStartSLOduration=5.553002803 podStartE2EDuration="13.396563723s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:55.522348407 +0000 UTC m=+925.622771333" lastFinishedPulling="2025-10-13 06:44:03.365909337 +0000 UTC m=+933.466332253" observedRunningTime="2025-10-13 06:44:05.392596847 +0000 UTC m=+935.493019763" watchObservedRunningTime="2025-10-13 06:44:05.396563723 +0000 UTC m=+935.496986639" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.414125 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" podStartSLOduration=4.277618838 podStartE2EDuration="13.414112392s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.182941164 +0000 UTC m=+924.283364080" lastFinishedPulling="2025-10-13 06:44:03.319434718 +0000 UTC m=+933.419857634" observedRunningTime="2025-10-13 06:44:05.413674049 +0000 UTC m=+935.514096965" watchObservedRunningTime="2025-10-13 06:44:05.414112392 +0000 UTC m=+935.514535308" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.440276 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" podStartSLOduration=3.864190731 podStartE2EDuration="12.440259971s" podCreationTimestamp="2025-10-13 06:43:53 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.754736798 +0000 UTC m=+924.855159714" lastFinishedPulling="2025-10-13 06:44:03.330806038 +0000 UTC m=+933.431228954" observedRunningTime="2025-10-13 06:44:05.436263605 +0000 UTC m=+935.536686521" watchObservedRunningTime="2025-10-13 06:44:05.440259971 +0000 UTC m=+935.540682887" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.463123 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" podStartSLOduration=4.37143354 podStartE2EDuration="13.463109423s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.155738195 +0000 UTC m=+924.256161111" lastFinishedPulling="2025-10-13 06:44:03.247414088 +0000 UTC m=+933.347836994" observedRunningTime="2025-10-13 06:44:05.458350215 +0000 UTC m=+935.558773131" watchObservedRunningTime="2025-10-13 06:44:05.463109423 +0000 UTC m=+935.563532339" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.475258 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" podStartSLOduration=4.018461885 podStartE2EDuration="13.475239985s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:53.825110529 +0000 UTC m=+923.925533445" lastFinishedPulling="2025-10-13 06:44:03.281888629 +0000 UTC m=+933.382311545" observedRunningTime="2025-10-13 06:44:05.473751482 +0000 UTC m=+935.574174408" watchObservedRunningTime="2025-10-13 06:44:05.475239985 +0000 UTC m=+935.575662901" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.500558 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" podStartSLOduration=4.215155034 podStartE2EDuration="13.500528329s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:53.961353293 +0000 UTC m=+924.061776209" lastFinishedPulling="2025-10-13 06:44:03.246726588 +0000 UTC m=+933.347149504" observedRunningTime="2025-10-13 06:44:05.497681466 +0000 UTC m=+935.598104382" watchObservedRunningTime="2025-10-13 06:44:05.500528329 +0000 UTC m=+935.600951245" Oct 13 06:44:05 crc kubenswrapper[4833]: I1013 06:44:05.516700 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" podStartSLOduration=4.529223029 podStartE2EDuration="13.516681088s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.294382358 +0000 UTC m=+924.394805274" lastFinishedPulling="2025-10-13 06:44:03.281840417 +0000 UTC m=+933.382263333" observedRunningTime="2025-10-13 06:44:05.512481156 +0000 UTC m=+935.612904082" watchObservedRunningTime="2025-10-13 06:44:05.516681088 +0000 UTC m=+935.617104004" Oct 13 06:44:10 crc kubenswrapper[4833]: I1013 06:44:10.265222 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" event={"ID":"8ea24fc5-58ac-426e-8943-eccec9261185","Type":"ContainerStarted","Data":"1d6d1aa699b85aa35bcf397c00d8b1183bc5ccae038d3f46a025c9fcab65ce1d"} Oct 13 06:44:10 crc kubenswrapper[4833]: I1013 06:44:10.265978 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" Oct 13 06:44:10 crc kubenswrapper[4833]: I1013 06:44:10.289168 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" podStartSLOduration=3.513763188 podStartE2EDuration="18.289149567s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.519949105 +0000 UTC m=+924.620372011" lastFinishedPulling="2025-10-13 06:44:09.295335474 +0000 UTC m=+939.395758390" observedRunningTime="2025-10-13 06:44:10.285891053 +0000 UTC m=+940.386313969" watchObservedRunningTime="2025-10-13 06:44:10.289149567 +0000 UTC m=+940.389572483" Oct 13 06:44:10 crc kubenswrapper[4833]: I1013 06:44:10.291384 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" podStartSLOduration=9.253080808 podStartE2EDuration="18.291375552s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.291772213 +0000 UTC m=+924.392195129" lastFinishedPulling="2025-10-13 06:44:03.330066947 +0000 UTC m=+933.430489873" 
observedRunningTime="2025-10-13 06:44:05.53710176 +0000 UTC m=+935.637524676" watchObservedRunningTime="2025-10-13 06:44:10.291375552 +0000 UTC m=+940.391798468" Oct 13 06:44:12 crc kubenswrapper[4833]: I1013 06:44:12.917503 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-w9zrf" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.050069 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-hmcxj" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.141836 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-4nz57" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.143081 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-5pqx6" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.146198 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-wl4kq" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.223759 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sqftk" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.242850 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-mhd5p" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.287083 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-dzsh5" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.316504 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" event={"ID":"96aa2f66-4ecd-476b-9bf2-a9da443767df","Type":"ContainerStarted","Data":"e63d29fb62ccc9fb7185c2d01795fa633ad4b22911e249216027a1135cd8794c"} Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.317310 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.321061 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" event={"ID":"e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6","Type":"ContainerStarted","Data":"e27bb9ec359378b8f15c9eeb5d749e2da53fcf3964ac46673ca3496b99408593"} Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.321702 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.325877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" event={"ID":"76ebee26-0a3b-49a8-92f1-4eb0362ed0c5","Type":"ContainerStarted","Data":"415518db6e2f6e37380ead443e27938b36f59130a282137df5f1ec0283c8a60c"} Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.331230 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" event={"ID":"5822a35e-6851-47e9-be13-3a5418c44787","Type":"ContainerStarted","Data":"08efab567a8d778277f16703d29813f5c16cbac19140494daaefed4706aa248b"} Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.331482 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.341863 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" podStartSLOduration=2.426837254 podStartE2EDuration="20.341847424s" podCreationTimestamp="2025-10-13 06:43:53 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.513828947 +0000 UTC m=+924.614251863" lastFinishedPulling="2025-10-13 06:44:12.428839117 +0000 UTC m=+942.529262033" observedRunningTime="2025-10-13 06:44:13.334879592 +0000 UTC m=+943.435302508" watchObservedRunningTime="2025-10-13 06:44:13.341847424 +0000 UTC m=+943.442270340" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.355843 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" podStartSLOduration=2.705349527 podStartE2EDuration="20.35582259s" podCreationTimestamp="2025-10-13 06:43:53 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.778343123 +0000 UTC m=+924.878766039" lastFinishedPulling="2025-10-13 06:44:12.428816186 +0000 UTC m=+942.529239102" observedRunningTime="2025-10-13 06:44:13.350625549 +0000 UTC m=+943.451048465" watchObservedRunningTime="2025-10-13 06:44:13.35582259 +0000 UTC m=+943.456245506" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.365838 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-542w7" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.404471 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" podStartSLOduration=3.801302602 podStartE2EDuration="21.404450731s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.825661637 +0000 UTC m=+924.926084543" lastFinishedPulling="2025-10-13 06:44:12.428809756 +0000 UTC m=+942.529232672" observedRunningTime="2025-10-13 06:44:13.398815517 +0000 UTC m=+943.499238433" watchObservedRunningTime="2025-10-13 06:44:13.404450731 +0000 UTC m=+943.504873647" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.415586 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k" podStartSLOduration=2.752569398 podStartE2EDuration="20.415566374s" podCreationTimestamp="2025-10-13 06:43:53 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.780810345 +0000 UTC m=+924.881233271" lastFinishedPulling="2025-10-13 06:44:12.443807311 +0000 UTC m=+942.544230247" observedRunningTime="2025-10-13 06:44:13.414887184 +0000 UTC m=+943.515310100" watchObservedRunningTime="2025-10-13 06:44:13.415566374 +0000 UTC m=+943.515989290" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.491916 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5nmfh" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 
06:44:13.491981 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-h2bzs" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.492010 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-5fzdw" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.599576 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-ffb5v" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.631261 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-gnrwn" Oct 13 06:44:13 crc kubenswrapper[4833]: I1013 06:44:13.700954 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5458f77c4-npmrc" Oct 13 06:44:14 crc kubenswrapper[4833]: I1013 06:44:14.341020 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" event={"ID":"a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972","Type":"ContainerStarted","Data":"02c034395e1c790fc9c925da004ee4bfb18cd797e83f453d6468fad583be4829"} Oct 13 06:44:14 crc kubenswrapper[4833]: I1013 06:44:14.356558 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" podStartSLOduration=3.104316665 podStartE2EDuration="22.356521783s" podCreationTimestamp="2025-10-13 06:43:52 +0000 UTC" firstStartedPulling="2025-10-13 06:43:54.502035715 +0000 UTC m=+924.602458631" lastFinishedPulling="2025-10-13 06:44:13.754240813 +0000 UTC m=+943.854663749" observedRunningTime="2025-10-13 06:44:14.354418322 +0000 UTC m=+944.454841238" watchObservedRunningTime="2025-10-13 06:44:14.356521783 +0000 UTC m=+944.456944699" Oct 13 06:44:14 crc kubenswrapper[4833]: I1013 06:44:14.930943 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-6wmgz" Oct 13 06:44:23 crc kubenswrapper[4833]: I1013 06:44:23.454806 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-9rvcx" Oct 13 06:44:23 crc kubenswrapper[4833]: I1013 06:44:23.505903 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" Oct 13 06:44:23 crc kubenswrapper[4833]: I1013 06:44:23.508279 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-hmwbl" Oct 13 06:44:23 crc kubenswrapper[4833]: I1013 06:44:23.700001 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-jh5ml" Oct 13 06:44:23 crc kubenswrapper[4833]: I1013 06:44:23.720546 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2kjcp" Oct 13 06:44:24 crc kubenswrapper[4833]: I1013 06:44:24.133606 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7bm68wk" Oct 13 06:44:30 crc 
kubenswrapper[4833]: I1013 06:44:30.543363 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:44:30 crc kubenswrapper[4833]: I1013 06:44:30.544078 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:44:30 crc kubenswrapper[4833]: I1013 06:44:30.544219 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:44:30 crc kubenswrapper[4833]: I1013 06:44:30.545119 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd9e737b5446edf3a1bc45401b54500d618ca084763a39fb5a0be12fcd006b99"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 06:44:30 crc kubenswrapper[4833]: I1013 06:44:30.545224 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://dd9e737b5446edf3a1bc45401b54500d618ca084763a39fb5a0be12fcd006b99" gracePeriod=600 Oct 13 06:44:31 crc kubenswrapper[4833]: I1013 06:44:31.479222 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="dd9e737b5446edf3a1bc45401b54500d618ca084763a39fb5a0be12fcd006b99" exitCode=0 Oct 13 06:44:31 crc kubenswrapper[4833]: I1013 06:44:31.479293 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"dd9e737b5446edf3a1bc45401b54500d618ca084763a39fb5a0be12fcd006b99"} Oct 13 06:44:31 crc kubenswrapper[4833]: I1013 06:44:31.479354 4833 scope.go:117] "RemoveContainer" containerID="0c7662799f74f815e9b59b491ebe056d2f40e7a81a10b5cc35be025975e84206" Oct 13 06:44:36 crc kubenswrapper[4833]: I1013 06:44:36.515569 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"e40769d80ac05fe37627a30679bf55af458c5472940a5c49dc7bce9376576247"} Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.580203 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-blzb7"] Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.582179 4833 util.go:30] "No sandbox for pod can be found. 
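The 06:44:30 through 06:44:36 entries above form a complete liveness-failure cycle: the HTTP probe to 127.0.0.1:8798/health gets connection refused, the container is killed with gracePeriod=600, the PLEG reports ContainerDied with exitCode=0, and a new container starts a few seconds later. A sketch of the probe-style check itself; the endpoint is taken from the log, while the 1-second timeout and the success criterion (any 2xx/3xx status) are assumptions about the configured probe:

```go
// Sketch: roughly what an HTTP liveness check against the endpoint
// probed above looks like. Not kubelet's prober code.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second} // timeout assumed
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused",
		// as in the log -> probe failure
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure:", resp.Status)
	}
}
```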
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.580203 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-blzb7"]
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.582179 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.584781 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.584952 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.585053 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.587366 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-blzb7"]
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.592082 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nm2fq"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.659328 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-lr8w5"]
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.660473 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.662902 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.668167 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-lr8w5"]
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.770154 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-config\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.770272 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz47l\" (UniqueName: \"kubernetes.io/projected/9c226b93-b8ee-4b05-b2b6-29e21944da6f-kube-api-access-sz47l\") pod \"dnsmasq-dns-7bfcb9d745-blzb7\" (UID: \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\") " pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.770301 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c226b93-b8ee-4b05-b2b6-29e21944da6f-config\") pod \"dnsmasq-dns-7bfcb9d745-blzb7\" (UID: \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\") " pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.770325 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkkvb\" (UniqueName: \"kubernetes.io/projected/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-kube-api-access-pkkvb\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.770363 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-dns-svc\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.871874 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-config\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.871927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz47l\" (UniqueName: \"kubernetes.io/projected/9c226b93-b8ee-4b05-b2b6-29e21944da6f-kube-api-access-sz47l\") pod \"dnsmasq-dns-7bfcb9d745-blzb7\" (UID: \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\") " pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.871958 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c226b93-b8ee-4b05-b2b6-29e21944da6f-config\") pod \"dnsmasq-dns-7bfcb9d745-blzb7\" (UID: \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\") " pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.871993 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkkvb\" (UniqueName: \"kubernetes.io/projected/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-kube-api-access-pkkvb\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.872053 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-dns-svc\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.872931 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-config\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.872965 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-dns-svc\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.873937 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c226b93-b8ee-4b05-b2b6-29e21944da6f-config\") pod \"dnsmasq-dns-7bfcb9d745-blzb7\" (UID: \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\") " pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.905943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkkvb\" (UniqueName: \"kubernetes.io/projected/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-kube-api-access-pkkvb\") pod \"dnsmasq-dns-758b79db4c-lr8w5\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.909336 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz47l\" (UniqueName: \"kubernetes.io/projected/9c226b93-b8ee-4b05-b2b6-29e21944da6f-kube-api-access-sz47l\") pod \"dnsmasq-dns-7bfcb9d745-blzb7\" (UID: \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\") " pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7"
Oct 13 06:44:38 crc kubenswrapper[4833]: I1013 06:44:38.982056 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-lr8w5"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.202903 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.326910 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-lr8w5"]
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.358398 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-phl5s"]
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.359506 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.403036 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-phl5s"]
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.460688 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-lr8w5"]
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.484333 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-dns-svc\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.484638 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-config\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.484663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm9lj\" (UniqueName: \"kubernetes.io/projected/e9859523-7b91-42bf-9439-b86433c88754-kube-api-access-mm9lj\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.557463 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758b79db4c-lr8w5" event={"ID":"4a178ef3-16d4-4642-b8a0-6a3f07fe662d","Type":"ContainerStarted","Data":"3176adea431ee765cdb9804dcd21b68005dd4857f88f2d98394611572af452aa"}
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.587077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-dns-svc\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.587275 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-config\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.587312 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm9lj\" (UniqueName: \"kubernetes.io/projected/e9859523-7b91-42bf-9439-b86433c88754-kube-api-access-mm9lj\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.588134 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-dns-svc\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.588638 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-config\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.605858 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm9lj\" (UniqueName: \"kubernetes.io/projected/e9859523-7b91-42bf-9439-b86433c88754-kube-api-access-mm9lj\") pod \"dnsmasq-dns-8575fc99d7-phl5s\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.672726 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-blzb7"]
Oct 13 06:44:39 crc kubenswrapper[4833]: I1013 06:44:39.684218 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.109058 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-phl5s"]
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.321042 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-blzb7"]
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.350481 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77597f887-8wtwb"]
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.351719 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.361292 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77597f887-8wtwb"]
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.401649 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-config\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.401705 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-dns-svc\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.401753 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q42ll\" (UniqueName: \"kubernetes.io/projected/fc889568-9917-471c-b7b6-02d71536e6db-kube-api-access-q42ll\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.503230 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q42ll\" (UniqueName: \"kubernetes.io/projected/fc889568-9917-471c-b7b6-02d71536e6db-kube-api-access-q42ll\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.503403 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-config\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.503439 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-dns-svc\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.504634 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-config\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.504653 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-dns-svc\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.528806 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q42ll\" (UniqueName: \"kubernetes.io/projected/fc889568-9917-471c-b7b6-02d71536e6db-kube-api-access-q42ll\") pod \"dnsmasq-dns-77597f887-8wtwb\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.533587 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.535245 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.538847 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.538962 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2hzsr"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.539020 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.539115 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.539176 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.539215 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.539268 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.550946 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.571330 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7" event={"ID":"9c226b93-b8ee-4b05-b2b6-29e21944da6f","Type":"ContainerStarted","Data":"0f08434244cbe897983630a306598445ac25ea6e6610e0cb804da2b109e1ad8f"}
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.572126 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" event={"ID":"e9859523-7b91-42bf-9439-b86433c88754","Type":"ContainerStarted","Data":"c14c7864df31a6838b000dd0f45bdfbd4ff27f348ebb130aa6f358b585e960c2"}
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.673808 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-8wtwb"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706124 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706199 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706229 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a6ab499-ed60-45e7-b510-5a43422aa7f5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706288 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706305 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706327 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706347 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a6ab499-ed60-45e7-b510-5a43422aa7f5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706361 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.706393 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvdcf\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-kube-api-access-xvdcf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807342 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvdcf\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-kube-api-access-xvdcf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807387 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807411 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807429 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807454 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a6ab499-ed60-45e7-b510-5a43422aa7f5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807482 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807510 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807527 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807575 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807600 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a6ab499-ed60-45e7-b510-5a43422aa7f5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.807614 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.808757 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.809519 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.809757 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.811182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.812840 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.814446 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.816576 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.827169 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a6ab499-ed60-45e7-b510-5a43422aa7f5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.827228 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.827477 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a6ab499-ed60-45e7-b510-5a43422aa7f5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.829463 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvdcf\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-kube-api-access-xvdcf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.838041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " pod="openstack/rabbitmq-cell1-server-0"
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 06:44:40 crc kubenswrapper[4833]: I1013 06:44:40.985286 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77597f887-8wtwb"] Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.350421 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 06:44:41 crc kubenswrapper[4833]: W1013 06:44:41.357116 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a6ab499_ed60_45e7_b510_5a43422aa7f5.slice/crio-0b84c0d8e32dd5f7e418f450d21a0b0dbf45ffb952b07c5d18db22a16dac8081 WatchSource:0}: Error finding container 0b84c0d8e32dd5f7e418f450d21a0b0dbf45ffb952b07c5d18db22a16dac8081: Status 404 returned error can't find the container with id 0b84c0d8e32dd5f7e418f450d21a0b0dbf45ffb952b07c5d18db22a16dac8081 Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.485514 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.488343 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.490556 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.490657 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-j4dlm" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.490708 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.490660 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.491773 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.492433 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.492915 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.498416 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.609983 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-8wtwb" event={"ID":"fc889568-9917-471c-b7b6-02d71536e6db","Type":"ContainerStarted","Data":"3293b820fe3dd8baa4be06ffe801c807bb9e1da907d2196959bf7ad31dca579b"} Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620045 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620075 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620092 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620121 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620148 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827f736f-2193-4ebd-ab7f-99fb22945d1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827f736f-2193-4ebd-ab7f-99fb22945d1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620195 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620253 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2fq\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-kube-api-access-nk2fq\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.620298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " 
pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.629107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0a6ab499-ed60-45e7-b510-5a43422aa7f5","Type":"ContainerStarted","Data":"0b84c0d8e32dd5f7e418f450d21a0b0dbf45ffb952b07c5d18db22a16dac8081"} Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721473 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721519 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721583 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721619 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827f736f-2193-4ebd-ab7f-99fb22945d1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721637 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827f736f-2193-4ebd-ab7f-99fb22945d1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721664 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721685 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk2fq\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-kube-api-access-nk2fq\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721707 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721751 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.721766 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.722516 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.722790 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.723671 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.724073 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.724565 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.728776 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.731887 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.741904 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/827f736f-2193-4ebd-ab7f-99fb22945d1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.743511 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827f736f-2193-4ebd-ab7f-99fb22945d1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.743733 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.755091 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk2fq\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-kube-api-access-nk2fq\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.793183 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " pod="openstack/rabbitmq-server-0" Oct 13 06:44:41 crc kubenswrapper[4833]: I1013 06:44:41.821217 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.398322 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 06:44:42 crc kubenswrapper[4833]: W1013 06:44:42.411249 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod827f736f_2193_4ebd_ab7f_99fb22945d1e.slice/crio-566396b1408580b2239a9e0ea20d35c824b1b882acf1d42473d5b4de5f5887be WatchSource:0}: Error finding container 566396b1408580b2239a9e0ea20d35c824b1b882acf1d42473d5b4de5f5887be: Status 404 returned error can't find the container with id 566396b1408580b2239a9e0ea20d35c824b1b882acf1d42473d5b4de5f5887be Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.641028 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"827f736f-2193-4ebd-ab7f-99fb22945d1e","Type":"ContainerStarted","Data":"566396b1408580b2239a9e0ea20d35c824b1b882acf1d42473d5b4de5f5887be"} Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.831813 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.833596 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.836210 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.836492 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.836645 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-bcpns" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.836868 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.840399 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.843963 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.844429 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.951676 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-secrets\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.951725 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.951764 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.951804 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dr9\" (UniqueName: \"kubernetes.io/projected/aa0ca608-57b5-4289-9271-fcc10a6c7422-kube-api-access-96dr9\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.951831 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0" Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.951852 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0" Oct 13 06:44:42 crc 
Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.951915 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:42 crc kubenswrapper[4833]: I1013 06:44:42.951963 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053426 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053480 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-secrets\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053504 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96dr9\" (UniqueName: \"kubernetes.io/projected/aa0ca608-57b5-4289-9271-fcc10a6c7422-kube-api-access-96dr9\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053635 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.053655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-default\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.054294 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.054465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.055379 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-default\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.055389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.056126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.067970 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.070051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.074100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96dr9\" (UniqueName: \"kubernetes.io/projected/aa0ca608-57b5-4289-9271-fcc10a6c7422-kube-api-access-96dr9\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.075671 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.091214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-secrets\") pod \"openstack-galera-0\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.169907 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 13 06:44:43 crc kubenswrapper[4833]: I1013 06:44:43.738182 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.411621 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.414581 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.415742 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.418310 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.418355 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.418569 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dxvgg"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.429917 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.577938 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t2wr\" (UniqueName: \"kubernetes.io/projected/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kube-api-access-6t2wr\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.578021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.578052 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.578072 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.578092 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.578107 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.578130 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.578151 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.578164 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.672186 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa0ca608-57b5-4289-9271-fcc10a6c7422","Type":"ContainerStarted","Data":"03828655167a40072a1527a33a61b9fbc5313e490328900dc94ec83fea826c0e"}
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680320 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680380 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680403 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680423 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680440 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680464 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680489 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680519 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680568 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t2wr\" (UniqueName: \"kubernetes.io/projected/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kube-api-access-6t2wr\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.680973 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.682211 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.682310 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.691066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.691288 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.691942 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.692157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.701828 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t2wr\" (UniqueName: \"kubernetes.io/projected/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kube-api-access-6t2wr\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.710910 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0"
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.728893 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.739953 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Need to start a new one" pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.743962 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.753479 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " pod="openstack/openstack-cell1-galera-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.753803 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.754011 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.759387 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-c2tpj" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.885791 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-config-data\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.886020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd62r\" (UniqueName: \"kubernetes.io/projected/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kube-api-access-gd62r\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.886123 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.886191 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.886280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kolla-config\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.990206 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kolla-config\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.990275 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-config-data\") pod 
\"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.990328 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd62r\" (UniqueName: \"kubernetes.io/projected/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kube-api-access-gd62r\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.990385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.990421 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.995915 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kolla-config\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.996341 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:44 crc kubenswrapper[4833]: I1013 06:44:44.996504 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:45 crc kubenswrapper[4833]: I1013 06:44:45.003116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-config-data\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:45 crc kubenswrapper[4833]: I1013 06:44:45.010074 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd62r\" (UniqueName: \"kubernetes.io/projected/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kube-api-access-gd62r\") pod \"memcached-0\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " pod="openstack/memcached-0" Oct 13 06:44:45 crc kubenswrapper[4833]: I1013 06:44:45.044605 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 13 06:44:45 crc kubenswrapper[4833]: I1013 06:44:45.110192 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 13 06:44:46 crc kubenswrapper[4833]: I1013 06:44:46.467068 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:44:46 crc kubenswrapper[4833]: I1013 06:44:46.470999 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 06:44:46 crc kubenswrapper[4833]: I1013 06:44:46.474216 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:44:46 crc kubenswrapper[4833]: I1013 06:44:46.474679 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-rgphf" Oct 13 06:44:46 crc kubenswrapper[4833]: I1013 06:44:46.619571 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmfh\" (UniqueName: \"kubernetes.io/projected/40245b56-c93c-4c17-873a-dcd87e3f041b-kube-api-access-fxmfh\") pod \"kube-state-metrics-0\" (UID: \"40245b56-c93c-4c17-873a-dcd87e3f041b\") " pod="openstack/kube-state-metrics-0" Oct 13 06:44:46 crc kubenswrapper[4833]: I1013 06:44:46.728336 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxmfh\" (UniqueName: \"kubernetes.io/projected/40245b56-c93c-4c17-873a-dcd87e3f041b-kube-api-access-fxmfh\") pod \"kube-state-metrics-0\" (UID: \"40245b56-c93c-4c17-873a-dcd87e3f041b\") " pod="openstack/kube-state-metrics-0" Oct 13 06:44:46 crc kubenswrapper[4833]: I1013 06:44:46.756109 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxmfh\" (UniqueName: \"kubernetes.io/projected/40245b56-c93c-4c17-873a-dcd87e3f041b-kube-api-access-fxmfh\") pod \"kube-state-metrics-0\" (UID: \"40245b56-c93c-4c17-873a-dcd87e3f041b\") " pod="openstack/kube-state-metrics-0" Oct 13 06:44:46 crc kubenswrapper[4833]: I1013 06:44:46.794082 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.877899 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rtrth"] Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.880774 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtrth" Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.889003 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.889303 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.889299 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-t5zz6" Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.894799 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7j8gx"] Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.897119 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.902219 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtrth"] Oct 13 06:44:50 crc kubenswrapper[4833]: I1013 06:44:50.909252 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7j8gx"] Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.000709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run-ovn\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.000794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-ovn-controller-tls-certs\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.000828 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fb7c39d-6b28-4530-b9b1-87c2af591f61-scripts\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.000857 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-log\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.000877 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-scripts\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.001004 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-lib\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.001062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-combined-ca-bundle\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.001207 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-run\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 
06:44:51.001248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq5j7\" (UniqueName: \"kubernetes.io/projected/5fb7c39d-6b28-4530-b9b1-87c2af591f61-kube-api-access-kq5j7\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.001282 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-log-ovn\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.001299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-etc-ovs\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.001345 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjb5\" (UniqueName: \"kubernetes.io/projected/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-kube-api-access-zsjb5\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.001433 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.102736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.102974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run-ovn\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103008 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-ovn-controller-tls-certs\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103026 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fb7c39d-6b28-4530-b9b1-87c2af591f61-scripts\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103045 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-log\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-scripts\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103107 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-lib\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-combined-ca-bundle\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103166 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-run\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103182 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq5j7\" (UniqueName: \"kubernetes.io/projected/5fb7c39d-6b28-4530-b9b1-87c2af591f61-kube-api-access-kq5j7\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103202 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-log-ovn\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103219 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-etc-ovs\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103243 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsjb5\" (UniqueName: \"kubernetes.io/projected/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-kube-api-access-zsjb5\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run-ovn\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " 
pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103727 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-log\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-lib\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103901 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-log-ovn\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103921 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.103975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-etc-ovs\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.108289 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-scripts\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.112836 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fb7c39d-6b28-4530-b9b1-87c2af591f61-scripts\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.113639 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-run\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.121406 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsjb5\" (UniqueName: \"kubernetes.io/projected/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-kube-api-access-zsjb5\") pod \"ovn-controller-ovs-7j8gx\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.125397 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq5j7\" (UniqueName: \"kubernetes.io/projected/5fb7c39d-6b28-4530-b9b1-87c2af591f61-kube-api-access-kq5j7\") pod 
\"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.126223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-ovn-controller-tls-certs\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.129304 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-combined-ca-bundle\") pod \"ovn-controller-rtrth\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.200693 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtrth" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.220808 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.790404 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.793964 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.797990 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.801262 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.801740 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.801784 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.801828 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.801929 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rfk7g" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.917869 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.918004 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-kube-api-access-9kbh9\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.918047 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.918070 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.918126 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-config\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.918149 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.918180 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:51 crc kubenswrapper[4833]: I1013 06:44:51.918205 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.019959 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-kube-api-access-9kbh9\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.020015 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.020038 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.020078 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-config\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " 
pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.020096 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.020121 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.020138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.020163 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.020983 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.021221 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-config\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.021796 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.024773 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.025437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.027963 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" 
(UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.032630 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.035886 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-kube-api-access-9kbh9\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.046781 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") " pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:52 crc kubenswrapper[4833]: I1013 06:44:52.123956 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.318767 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.344527 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.349037 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.349327 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.349363 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hvw7s" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.350132 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.354020 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.455037 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-config\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.455125 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.455156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.455187 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6jx\" (UniqueName: \"kubernetes.io/projected/336d549b-b94b-4966-af57-2289b1c8acc8-kube-api-access-wm6jx\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.455256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.455290 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.455319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.455355 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.557075 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.557135 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.557204 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.557243 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 
crc kubenswrapper[4833]: I1013 06:44:54.557298 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-config\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.557362 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.557386 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.557413 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6jx\" (UniqueName: \"kubernetes.io/projected/336d549b-b94b-4966-af57-2289b1c8acc8-kube-api-access-wm6jx\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.558484 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.558597 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.560374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-config\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.560981 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.565006 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.568206 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " 
pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.572971 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.574569 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6jx\" (UniqueName: \"kubernetes.io/projected/336d549b-b94b-4966-af57-2289b1c8acc8-kube-api-access-wm6jx\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.582812 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:54 crc kubenswrapper[4833]: I1013 06:44:54.673087 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 13 06:44:55 crc kubenswrapper[4833]: E1013 06:44:55.999048 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:adcdeb8ecd601fb03c3b0901d5b5111af2ca48f7dd443e22224db6daaf08f5d0" Oct 13 06:44:56 crc kubenswrapper[4833]: E1013 06:44:55.999405 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:adcdeb8ecd601fb03c3b0901d5b5111af2ca48f7dd443e22224db6daaf08f5d0,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nk2fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(827f736f-2193-4ebd-ab7f-99fb22945d1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 06:44:56 crc kubenswrapper[4833]: E1013 06:44:56.000691 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" Oct 13 06:44:56 crc kubenswrapper[4833]: E1013 06:44:56.017650 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:adcdeb8ecd601fb03c3b0901d5b5111af2ca48f7dd443e22224db6daaf08f5d0" Oct 13 06:44:56 crc kubenswrapper[4833]: E1013 06:44:56.018003 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:adcdeb8ecd601fb03c3b0901d5b5111af2ca48f7dd443e22224db6daaf08f5d0,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 
0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvdcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(0a6ab499-ed60-45e7-b510-5a43422aa7f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 06:44:56 crc kubenswrapper[4833]: E1013 06:44:56.019223 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" Oct 13 06:44:56 crc kubenswrapper[4833]: E1013 06:44:56.755288 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:adcdeb8ecd601fb03c3b0901d5b5111af2ca48f7dd443e22224db6daaf08f5d0\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" Oct 13 06:44:56 crc kubenswrapper[4833]: E1013 06:44:56.756335 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:adcdeb8ecd601fb03c3b0901d5b5111af2ca48f7dd443e22224db6daaf08f5d0\\\"\"" pod="openstack/rabbitmq-server-0" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.142745 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw"] Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 
06:45:00.144293 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.146289 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.147233 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.154050 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw"] Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.249023 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1129198-3dd0-4ad3-8211-eb80e02362af-config-volume\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.249430 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n845d\" (UniqueName: \"kubernetes.io/projected/e1129198-3dd0-4ad3-8211-eb80e02362af-kube-api-access-n845d\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.249659 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1129198-3dd0-4ad3-8211-eb80e02362af-secret-volume\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.342331 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.342494 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkkvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-758b79db4c-lr8w5_openstack(4a178ef3-16d4-4642-b8a0-6a3f07fe662d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.343699 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-758b79db4c-lr8w5" podUID="4a178ef3-16d4-4642-b8a0-6a3f07fe662d" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.351589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1129198-3dd0-4ad3-8211-eb80e02362af-secret-volume\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.351674 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1129198-3dd0-4ad3-8211-eb80e02362af-config-volume\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.351727 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n845d\" (UniqueName: \"kubernetes.io/projected/e1129198-3dd0-4ad3-8211-eb80e02362af-kube-api-access-n845d\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc 
kubenswrapper[4833]: I1013 06:45:00.355023 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1129198-3dd0-4ad3-8211-eb80e02362af-config-volume\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.364012 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.364148 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q42ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-77597f887-8wtwb_openstack(fc889568-9917-471c-b7b6-02d71536e6db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.364726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1129198-3dd0-4ad3-8211-eb80e02362af-secret-volume\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.365377 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-77597f887-8wtwb" podUID="fc889568-9917-471c-b7b6-02d71536e6db" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.367568 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n845d\" (UniqueName: \"kubernetes.io/projected/e1129198-3dd0-4ad3-8211-eb80e02362af-kube-api-access-n845d\") pod \"collect-profiles-29338965-dfhlw\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.399394 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.399806 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mm9lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8575fc99d7-phl5s_openstack(e9859523-7b91-42bf-9439-b86433c88754): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.401021 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" podUID="e9859523-7b91-42bf-9439-b86433c88754" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.446752 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.446913 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz47l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bfcb9d745-blzb7_openstack(9c226b93-b8ee-4b05-b2b6-29e21944da6f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.448187 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7" podUID="9c226b93-b8ee-4b05-b2b6-29e21944da6f" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.464570 4833 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.784358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa0ca608-57b5-4289-9271-fcc10a6c7422","Type":"ContainerStarted","Data":"b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc"} Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.786163 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df\\\"\"" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" podUID="e9859523-7b91-42bf-9439-b86433c88754" Oct 13 06:45:00 crc kubenswrapper[4833]: E1013 06:45:00.789432 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df\\\"\"" pod="openstack/dnsmasq-dns-77597f887-8wtwb" podUID="fc889568-9917-471c-b7b6-02d71536e6db" Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.911022 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.924449 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:45:00 crc kubenswrapper[4833]: W1013 06:45:00.925235 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa4d68da_3ce0_4d91_a414_0f4fd4bd3dba.slice/crio-93de4ba4fb1191bc7db4fdba21230ba56fb035a2a010fd3a123a7e8b541d3fd1 WatchSource:0}: Error finding container 93de4ba4fb1191bc7db4fdba21230ba56fb035a2a010fd3a123a7e8b541d3fd1: Status 404 returned error can't find the container with id 93de4ba4fb1191bc7db4fdba21230ba56fb035a2a010fd3a123a7e8b541d3fd1 Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.929596 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 06:45:00 crc kubenswrapper[4833]: W1013 06:45:00.933179 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40245b56_c93c_4c17_873a_dcd87e3f041b.slice/crio-594667a192b9b362db3df11438c66001e3f367123a3987fc380ca4d8b9efe160 WatchSource:0}: Error finding container 594667a192b9b362db3df11438c66001e3f367123a3987fc380ca4d8b9efe160: Status 404 returned error can't find the container with id 594667a192b9b362db3df11438c66001e3f367123a3987fc380ca4d8b9efe160 Oct 13 06:45:00 crc kubenswrapper[4833]: I1013 06:45:00.936089 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 06:45:00 crc kubenswrapper[4833]: W1013 06:45:00.939747 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2db326_7b3a_4cc8_acb4_9c680c8f4972.slice/crio-95e2db78158d9efaa6ce3d6cf05b6c8d57049caddb3ab58ab62b842d68ff3d79 WatchSource:0}: Error finding container 95e2db78158d9efaa6ce3d6cf05b6c8d57049caddb3ab58ab62b842d68ff3d79: Status 404 returned error can't find the container with id 95e2db78158d9efaa6ce3d6cf05b6c8d57049caddb3ab58ab62b842d68ff3d79 
Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.045515 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.058618 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtrth"] Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.141074 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.156831 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw"] Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.250209 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.271202 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz47l\" (UniqueName: \"kubernetes.io/projected/9c226b93-b8ee-4b05-b2b6-29e21944da6f-kube-api-access-sz47l\") pod \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\" (UID: \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\") " Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.271280 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c226b93-b8ee-4b05-b2b6-29e21944da6f-config\") pod \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\" (UID: \"9c226b93-b8ee-4b05-b2b6-29e21944da6f\") " Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.272181 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c226b93-b8ee-4b05-b2b6-29e21944da6f-config" (OuterVolumeSpecName: "config") pod "9c226b93-b8ee-4b05-b2b6-29e21944da6f" (UID: "9c226b93-b8ee-4b05-b2b6-29e21944da6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.279952 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-lr8w5" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.282629 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c226b93-b8ee-4b05-b2b6-29e21944da6f-kube-api-access-sz47l" (OuterVolumeSpecName: "kube-api-access-sz47l") pod "9c226b93-b8ee-4b05-b2b6-29e21944da6f" (UID: "9c226b93-b8ee-4b05-b2b6-29e21944da6f"). InnerVolumeSpecName "kube-api-access-sz47l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.337791 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7j8gx"] Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.373015 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkkvb\" (UniqueName: \"kubernetes.io/projected/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-kube-api-access-pkkvb\") pod \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.373130 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-config\") pod \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.373185 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-dns-svc\") pod \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\" (UID: \"4a178ef3-16d4-4642-b8a0-6a3f07fe662d\") " Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.373680 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz47l\" (UniqueName: \"kubernetes.io/projected/9c226b93-b8ee-4b05-b2b6-29e21944da6f-kube-api-access-sz47l\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.373706 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c226b93-b8ee-4b05-b2b6-29e21944da6f-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.374123 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a178ef3-16d4-4642-b8a0-6a3f07fe662d" (UID: "4a178ef3-16d4-4642-b8a0-6a3f07fe662d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.375000 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-config" (OuterVolumeSpecName: "config") pod "4a178ef3-16d4-4642-b8a0-6a3f07fe662d" (UID: "4a178ef3-16d4-4642-b8a0-6a3f07fe662d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.376762 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-kube-api-access-pkkvb" (OuterVolumeSpecName: "kube-api-access-pkkvb") pod "4a178ef3-16d4-4642-b8a0-6a3f07fe662d" (UID: "4a178ef3-16d4-4642-b8a0-6a3f07fe662d"). InnerVolumeSpecName "kube-api-access-pkkvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.475415 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.475450 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.475460 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkkvb\" (UniqueName: \"kubernetes.io/projected/4a178ef3-16d4-4642-b8a0-6a3f07fe662d-kube-api-access-pkkvb\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.799154 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fa2db326-7b3a-4cc8-acb4-9c680c8f4972","Type":"ContainerStarted","Data":"e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.799513 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fa2db326-7b3a-4cc8-acb4-9c680c8f4972","Type":"ContainerStarted","Data":"95e2db78158d9efaa6ce3d6cf05b6c8d57049caddb3ab58ab62b842d68ff3d79"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.807051 4833 generic.go:334] "Generic (PLEG): container finished" podID="e1129198-3dd0-4ad3-8211-eb80e02362af" containerID="e5be0d78a4e72dae27c51937eee7234a33b843053201d55f6115c0604a701dfe" exitCode=0 Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.807092 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" event={"ID":"e1129198-3dd0-4ad3-8211-eb80e02362af","Type":"ContainerDied","Data":"e5be0d78a4e72dae27c51937eee7234a33b843053201d55f6115c0604a701dfe"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.807138 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" event={"ID":"e1129198-3dd0-4ad3-8211-eb80e02362af","Type":"ContainerStarted","Data":"4d77b4e62e82099ac29a0af9d8bbb3cccce5fc2690ceafb681ff9e786efbc046"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.809419 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7j8gx" event={"ID":"6aef55de-c4dd-409e-b9f1-b79adc99ea8d","Type":"ContainerStarted","Data":"c45ec8164269a0ab296754a0a07dd0c8f338f37fc34ab2c06eab9cb5210b1958"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.813910 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth" event={"ID":"5fb7c39d-6b28-4530-b9b1-87c2af591f61","Type":"ContainerStarted","Data":"d91f7b014bb590335e9ed56269ff92c8cca467ed322b755a0eb8b2d53f724508"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.817955 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"336d549b-b94b-4966-af57-2289b1c8acc8","Type":"ContainerStarted","Data":"d1120e2bdc884e7d95099ddcf3bbe34694190a558f9990df9daf43b8fd5a6bde"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.821982 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"b1ab7add-ea30-4610-a96a-2cad6ae8e40c","Type":"ContainerStarted","Data":"0c041d57a850254e5a259cd1bbac5d33a62e8cb63bbd03709ff6a7e402699fb6"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.822814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"40245b56-c93c-4c17-873a-dcd87e3f041b","Type":"ContainerStarted","Data":"594667a192b9b362db3df11438c66001e3f367123a3987fc380ca4d8b9efe160"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.823528 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7" event={"ID":"9c226b93-b8ee-4b05-b2b6-29e21944da6f","Type":"ContainerDied","Data":"0f08434244cbe897983630a306598445ac25ea6e6610e0cb804da2b109e1ad8f"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.823617 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-blzb7" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.834301 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758b79db4c-lr8w5" event={"ID":"4a178ef3-16d4-4642-b8a0-6a3f07fe662d","Type":"ContainerDied","Data":"3176adea431ee765cdb9804dcd21b68005dd4857f88f2d98394611572af452aa"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.834419 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-lr8w5" Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.839285 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba","Type":"ContainerStarted","Data":"93de4ba4fb1191bc7db4fdba21230ba56fb035a2a010fd3a123a7e8b541d3fd1"} Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.905508 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-blzb7"] Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.918077 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-blzb7"] Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.933349 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-lr8w5"] Oct 13 06:45:01 crc kubenswrapper[4833]: I1013 06:45:01.944598 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-lr8w5"] Oct 13 06:45:02 crc kubenswrapper[4833]: I1013 06:45:02.642092 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a178ef3-16d4-4642-b8a0-6a3f07fe662d" path="/var/lib/kubelet/pods/4a178ef3-16d4-4642-b8a0-6a3f07fe662d/volumes" Oct 13 06:45:02 crc kubenswrapper[4833]: I1013 06:45:02.642865 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c226b93-b8ee-4b05-b2b6-29e21944da6f" path="/var/lib/kubelet/pods/9c226b93-b8ee-4b05-b2b6-29e21944da6f/volumes" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.161680 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.312500 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1129198-3dd0-4ad3-8211-eb80e02362af-secret-volume\") pod \"e1129198-3dd0-4ad3-8211-eb80e02362af\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.312618 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n845d\" (UniqueName: \"kubernetes.io/projected/e1129198-3dd0-4ad3-8211-eb80e02362af-kube-api-access-n845d\") pod \"e1129198-3dd0-4ad3-8211-eb80e02362af\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.312683 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1129198-3dd0-4ad3-8211-eb80e02362af-config-volume\") pod \"e1129198-3dd0-4ad3-8211-eb80e02362af\" (UID: \"e1129198-3dd0-4ad3-8211-eb80e02362af\") " Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.314049 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1129198-3dd0-4ad3-8211-eb80e02362af-config-volume" (OuterVolumeSpecName: "config-volume") pod "e1129198-3dd0-4ad3-8211-eb80e02362af" (UID: "e1129198-3dd0-4ad3-8211-eb80e02362af"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.318306 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1129198-3dd0-4ad3-8211-eb80e02362af-kube-api-access-n845d" (OuterVolumeSpecName: "kube-api-access-n845d") pod "e1129198-3dd0-4ad3-8211-eb80e02362af" (UID: "e1129198-3dd0-4ad3-8211-eb80e02362af"). InnerVolumeSpecName "kube-api-access-n845d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.318996 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1129198-3dd0-4ad3-8211-eb80e02362af-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e1129198-3dd0-4ad3-8211-eb80e02362af" (UID: "e1129198-3dd0-4ad3-8211-eb80e02362af"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.415295 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1129198-3dd0-4ad3-8211-eb80e02362af-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.415350 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n845d\" (UniqueName: \"kubernetes.io/projected/e1129198-3dd0-4ad3-8211-eb80e02362af-kube-api-access-n845d\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.415364 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1129198-3dd0-4ad3-8211-eb80e02362af-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.856243 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" event={"ID":"e1129198-3dd0-4ad3-8211-eb80e02362af","Type":"ContainerDied","Data":"4d77b4e62e82099ac29a0af9d8bbb3cccce5fc2690ceafb681ff9e786efbc046"} Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.856301 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d77b4e62e82099ac29a0af9d8bbb3cccce5fc2690ceafb681ff9e786efbc046" Oct 13 06:45:03 crc kubenswrapper[4833]: I1013 06:45:03.857571 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw" Oct 13 06:45:06 crc kubenswrapper[4833]: I1013 06:45:06.884914 4833 generic.go:334] "Generic (PLEG): container finished" podID="aa0ca608-57b5-4289-9271-fcc10a6c7422" containerID="b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc" exitCode=0 Oct 13 06:45:06 crc kubenswrapper[4833]: I1013 06:45:06.885826 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa0ca608-57b5-4289-9271-fcc10a6c7422","Type":"ContainerDied","Data":"b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc"} Oct 13 06:45:08 crc kubenswrapper[4833]: I1013 06:45:08.901089 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" containerID="e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132" exitCode=0 Oct 13 06:45:08 crc kubenswrapper[4833]: I1013 06:45:08.901176 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fa2db326-7b3a-4cc8-acb4-9c680c8f4972","Type":"ContainerDied","Data":"e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.946708 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa0ca608-57b5-4289-9271-fcc10a6c7422","Type":"ContainerStarted","Data":"be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.949469 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b1ab7add-ea30-4610-a96a-2cad6ae8e40c","Type":"ContainerStarted","Data":"e320ad7d5893dd2a3cf0ab4db95afc8ff7b33d93872d0c9924dfcdb12787887f"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.951001 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"40245b56-c93c-4c17-873a-dcd87e3f041b","Type":"ContainerStarted","Data":"7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.951087 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.952713 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7j8gx" event={"ID":"6aef55de-c4dd-409e-b9f1-b79adc99ea8d","Type":"ContainerStarted","Data":"e01d929298cb195a544729b45935babcfa665b42e6e77270e6adc29a83a7fd2f"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.954360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth" event={"ID":"5fb7c39d-6b28-4530-b9b1-87c2af591f61","Type":"ContainerStarted","Data":"14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.954499 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rtrth" Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.955970 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"336d549b-b94b-4966-af57-2289b1c8acc8","Type":"ContainerStarted","Data":"4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.957614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba","Type":"ContainerStarted","Data":"5e2a48246659117dd99f108e17e66bdef082cff5d89afeb57ea91830bf119391"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.957725 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.959412 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fa2db326-7b3a-4cc8-acb4-9c680c8f4972","Type":"ContainerStarted","Data":"fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c"} Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.970006 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=13.338898142 podStartE2EDuration="29.969983998s" podCreationTimestamp="2025-10-13 06:44:41 +0000 UTC" firstStartedPulling="2025-10-13 06:44:43.754345781 +0000 UTC m=+973.854768697" lastFinishedPulling="2025-10-13 06:45:00.385431637 +0000 UTC m=+990.485854553" observedRunningTime="2025-10-13 06:45:10.966217339 +0000 UTC m=+1001.066640265" watchObservedRunningTime="2025-10-13 06:45:10.969983998 +0000 UTC m=+1001.070406924" Oct 13 06:45:10 crc kubenswrapper[4833]: I1013 06:45:10.988015 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rtrth" podStartSLOduration=11.635741735 podStartE2EDuration="20.987992481s" podCreationTimestamp="2025-10-13 06:44:50 +0000 UTC" firstStartedPulling="2025-10-13 06:45:01.08704867 +0000 UTC m=+991.187471586" lastFinishedPulling="2025-10-13 06:45:10.439299416 +0000 UTC m=+1000.539722332" observedRunningTime="2025-10-13 06:45:10.986785596 +0000 UTC m=+1001.087208512" watchObservedRunningTime="2025-10-13 06:45:10.987992481 +0000 UTC m=+1001.088415397" Oct 13 06:45:11 crc kubenswrapper[4833]: I1013 06:45:11.008344 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/memcached-0" podStartSLOduration=18.03905543 podStartE2EDuration="27.008323661s" podCreationTimestamp="2025-10-13 06:44:44 +0000 UTC" firstStartedPulling="2025-10-13 06:45:00.927109718 +0000 UTC m=+991.027532634" lastFinishedPulling="2025-10-13 06:45:09.896377949 +0000 UTC m=+999.996800865" observedRunningTime="2025-10-13 06:45:11.002594405 +0000 UTC m=+1001.103017321" watchObservedRunningTime="2025-10-13 06:45:11.008323661 +0000 UTC m=+1001.108746577" Oct 13 06:45:11 crc kubenswrapper[4833]: I1013 06:45:11.084785 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.559806172 podStartE2EDuration="25.08476569s" podCreationTimestamp="2025-10-13 06:44:46 +0000 UTC" firstStartedPulling="2025-10-13 06:45:00.939774236 +0000 UTC m=+991.040197142" lastFinishedPulling="2025-10-13 06:45:10.464733744 +0000 UTC m=+1000.565156660" observedRunningTime="2025-10-13 06:45:11.080744883 +0000 UTC m=+1001.181167799" watchObservedRunningTime="2025-10-13 06:45:11.08476569 +0000 UTC m=+1001.185188626" Oct 13 06:45:11 crc kubenswrapper[4833]: I1013 06:45:11.085735 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.085729068 podStartE2EDuration="28.085729068s" podCreationTimestamp="2025-10-13 06:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:45:11.06615247 +0000 UTC m=+1001.166575386" watchObservedRunningTime="2025-10-13 06:45:11.085729068 +0000 UTC m=+1001.186152004" Oct 13 06:45:11 crc kubenswrapper[4833]: I1013 06:45:11.971695 4833 generic.go:334] "Generic (PLEG): container finished" podID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerID="e01d929298cb195a544729b45935babcfa665b42e6e77270e6adc29a83a7fd2f" exitCode=0 Oct 13 06:45:11 crc kubenswrapper[4833]: I1013 06:45:11.971784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7j8gx" event={"ID":"6aef55de-c4dd-409e-b9f1-b79adc99ea8d","Type":"ContainerDied","Data":"e01d929298cb195a544729b45935babcfa665b42e6e77270e6adc29a83a7fd2f"} Oct 13 06:45:12 crc kubenswrapper[4833]: I1013 06:45:12.981413 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"827f736f-2193-4ebd-ab7f-99fb22945d1e","Type":"ContainerStarted","Data":"8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a"} Oct 13 06:45:13 crc kubenswrapper[4833]: I1013 06:45:13.170564 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 13 06:45:13 crc kubenswrapper[4833]: I1013 06:45:13.171634 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.003004 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b1ab7add-ea30-4610-a96a-2cad6ae8e40c","Type":"ContainerStarted","Data":"f94b7170cff535d70b886a880f441f6bc49ccf39c462e54f24bba46d4e1405e6"} Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.004394 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0a6ab499-ed60-45e7-b510-5a43422aa7f5","Type":"ContainerStarted","Data":"81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7"} Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.007459 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7j8gx" event={"ID":"6aef55de-c4dd-409e-b9f1-b79adc99ea8d","Type":"ContainerStarted","Data":"8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0"} Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.007511 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7j8gx" event={"ID":"6aef55de-c4dd-409e-b9f1-b79adc99ea8d","Type":"ContainerStarted","Data":"ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574"} Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.007584 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.007625 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.009263 4833 generic.go:334] "Generic (PLEG): container finished" podID="e9859523-7b91-42bf-9439-b86433c88754" containerID="8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f" exitCode=0 Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.009334 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" event={"ID":"e9859523-7b91-42bf-9439-b86433c88754","Type":"ContainerDied","Data":"8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f"} Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.013511 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"336d549b-b94b-4966-af57-2289b1c8acc8","Type":"ContainerStarted","Data":"006c322d6580fcba72f2451b54eabfb708adf2bd8b5526724079641631c1a6be"} Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.026325 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.539902154 podStartE2EDuration="24.026205988s" podCreationTimestamp="2025-10-13 06:44:50 +0000 UTC" firstStartedPulling="2025-10-13 06:45:01.153636713 +0000 UTC m=+991.254059629" lastFinishedPulling="2025-10-13 06:45:13.639940547 +0000 UTC m=+1003.740363463" observedRunningTime="2025-10-13 06:45:14.024667533 +0000 UTC m=+1004.125090479" watchObservedRunningTime="2025-10-13 06:45:14.026205988 +0000 UTC m=+1004.126628904" Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.046827 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7j8gx" podStartSLOduration=15.016686759 podStartE2EDuration="24.046811736s" podCreationTimestamp="2025-10-13 06:44:50 +0000 UTC" firstStartedPulling="2025-10-13 06:45:01.346386057 +0000 UTC m=+991.446808973" lastFinishedPulling="2025-10-13 06:45:10.376511034 +0000 UTC m=+1000.476933950" observedRunningTime="2025-10-13 06:45:14.04112175 +0000 UTC m=+1004.141544666" watchObservedRunningTime="2025-10-13 06:45:14.046811736 +0000 UTC m=+1004.147234652" Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.100772 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.538455921 podStartE2EDuration="21.100751741s" podCreationTimestamp="2025-10-13 06:44:53 +0000 UTC" firstStartedPulling="2025-10-13 06:45:01.053547238 +0000 UTC m=+991.153970154" lastFinishedPulling="2025-10-13 06:45:13.615843058 +0000 UTC m=+1003.716265974" observedRunningTime="2025-10-13 06:45:14.096986382 +0000 UTC m=+1004.197409298" 
watchObservedRunningTime="2025-10-13 06:45:14.100751741 +0000 UTC m=+1004.201174667" Oct 13 06:45:14 crc kubenswrapper[4833]: I1013 06:45:14.673161 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.021919 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" event={"ID":"e9859523-7b91-42bf-9439-b86433c88754","Type":"ContainerStarted","Data":"f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d"} Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.022195 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.023233 4833 generic.go:334] "Generic (PLEG): container finished" podID="fc889568-9917-471c-b7b6-02d71536e6db" containerID="98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff" exitCode=0 Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.023315 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-8wtwb" event={"ID":"fc889568-9917-471c-b7b6-02d71536e6db","Type":"ContainerDied","Data":"98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff"} Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.045625 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.045665 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.049010 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" podStartSLOduration=2.554891988 podStartE2EDuration="36.048999652s" podCreationTimestamp="2025-10-13 06:44:39 +0000 UTC" firstStartedPulling="2025-10-13 06:44:40.124503544 +0000 UTC m=+970.224926460" lastFinishedPulling="2025-10-13 06:45:13.618611208 +0000 UTC m=+1003.719034124" observedRunningTime="2025-10-13 06:45:15.04134768 +0000 UTC m=+1005.141770596" watchObservedRunningTime="2025-10-13 06:45:15.048999652 +0000 UTC m=+1005.149422568" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.117278 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.673274 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.715010 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.765863 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 13 06:45:15 crc kubenswrapper[4833]: I1013 06:45:15.817590 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.030965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-8wtwb" event={"ID":"fc889568-9917-471c-b7b6-02d71536e6db","Type":"ContainerStarted","Data":"adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730"} Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.044829 4833 
Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.069150 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.125142 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.162739 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.299865 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-phl5s"] Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.334404 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-v7pt9"] Oct 13 06:45:16 crc kubenswrapper[4833]: E1013 06:45:16.334809 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1129198-3dd0-4ad3-8211-eb80e02362af" containerName="collect-profiles" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.334825 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1129198-3dd0-4ad3-8211-eb80e02362af" containerName="collect-profiles" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.335104 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1129198-3dd0-4ad3-8211-eb80e02362af" containerName="collect-profiles" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.336178 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.341851 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-v7pt9"] Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.342483 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.433018 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.433068 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-config\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.433093 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fgkj\" (UniqueName: \"kubernetes.io/projected/6741bee5-5b7b-46cf-ae60-c620f80784cf-kube-api-access-5fgkj\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.433141 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.513870 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lx4t5"] Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.517680 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.519987 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.528221 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lx4t5"] Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.533669 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovs-rundir\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.533717 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.533784 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-combined-ca-bundle\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.533819 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b98eb9-459c-4a87-88e3-63624b7969b9-config\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.533838 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxshq\" (UniqueName: \"kubernetes.io/projected/c7b98eb9-459c-4a87-88e3-63624b7969b9-kube-api-access-qxshq\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.533873 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.533996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-config\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.534055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fgkj\" (UniqueName: \"kubernetes.io/projected/6741bee5-5b7b-46cf-ae60-c620f80784cf-kube-api-access-5fgkj\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " 
pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.534102 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovn-rundir\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.534128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.534403 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.534623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.534737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-config\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.563226 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fgkj\" (UniqueName: \"kubernetes.io/projected/6741bee5-5b7b-46cf-ae60-c620f80784cf-kube-api-access-5fgkj\") pod \"dnsmasq-dns-545fb8c44f-v7pt9\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.635630 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxshq\" (UniqueName: \"kubernetes.io/projected/c7b98eb9-459c-4a87-88e3-63624b7969b9-kube-api-access-qxshq\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.635713 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovn-rundir\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.635897 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " 
pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.635991 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovs-rundir\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.636128 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-combined-ca-bundle\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.636180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovn-rundir\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.636189 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b98eb9-459c-4a87-88e3-63624b7969b9-config\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.636425 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovs-rundir\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.636994 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b98eb9-459c-4a87-88e3-63624b7969b9-config\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.641237 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-combined-ca-bundle\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.653896 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.659040 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.664097 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxshq\" (UniqueName: \"kubernetes.io/projected/c7b98eb9-459c-4a87-88e3-63624b7969b9-kube-api-access-qxshq\") pod \"ovn-controller-metrics-lx4t5\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.809977 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.834226 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.840503 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77597f887-8wtwb"] Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.915819 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-946f77c87-k2cmv"] Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.934689 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.942404 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-946f77c87-k2cmv"] Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.944271 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfhmh\" (UniqueName: \"kubernetes.io/projected/baad73e5-be59-4193-8456-5c5d3c4a0b90-kube-api-access-vfhmh\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.944379 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-dns-svc\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.944450 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-ovsdbserver-sb\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.946991 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-config\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 
Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.986134 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-w8zvt"] Oct 13 06:45:16 crc kubenswrapper[4833]: I1013 06:45:16.998914 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.001886 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.023450 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-w8zvt"] Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.046556 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" podUID="e9859523-7b91-42bf-9439-b86433c88754" containerName="dnsmasq-dns" containerID="cri-o://f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d" gracePeriod=10 Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.046911 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.046927 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77597f887-8wtwb" podUID="fc889568-9917-471c-b7b6-02d71536e6db" containerName="dnsmasq-dns" containerID="cri-o://adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730" gracePeriod=10 Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.046937 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77597f887-8wtwb" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059261 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-dns-svc\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059466 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-config\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059497 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-ovsdbserver-sb\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059526 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-config\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059641 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059704 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqv4\" (UniqueName: \"kubernetes.io/projected/cbaa5e3f-00a5-4af4-a775-968ad570939c-kube-api-access-jdqv4\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059730 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.059778 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfhmh\" (UniqueName: \"kubernetes.io/projected/baad73e5-be59-4193-8456-5c5d3c4a0b90-kube-api-access-vfhmh\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.060335 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-ovsdbserver-sb\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.060354 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-dns-svc\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.060774 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-config\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.079694 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfhmh\" (UniqueName: \"kubernetes.io/projected/baad73e5-be59-4193-8456-5c5d3c4a0b90-kube-api-access-vfhmh\") pod \"dnsmasq-dns-946f77c87-k2cmv\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.104358 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 13 06:45:17 crc 
kubenswrapper[4833]: I1013 06:45:17.161437 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.161537 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqv4\" (UniqueName: \"kubernetes.io/projected/cbaa5e3f-00a5-4af4-a775-968ad570939c-kube-api-access-jdqv4\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.161590 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.161698 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-config\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.161750 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.162699 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.163467 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.164855 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.164917 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-config\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.187044 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jdqv4\" (UniqueName: \"kubernetes.io/projected/cbaa5e3f-00a5-4af4-a775-968ad570939c-kube-api-access-jdqv4\") pod \"dnsmasq-dns-7b587f8db7-w8zvt\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.280825 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.314486 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.318854 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.320364 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.320558 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-df7gb" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.320603 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.321617 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.328487 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.341232 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.467879 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.468009 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.468040 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrm2\" (UniqueName: \"kubernetes.io/projected/c1a7008b-3448-4108-81b0-4d16484a6f7b-kube-api-access-wcrm2\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.468075 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.468142 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.468173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-config\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.468238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-scripts\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.479705 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-v7pt9"] Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.489715 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lx4t5"] Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.523379 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.569290 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-config\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.569359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-scripts\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.569407 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.569504 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.569525 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrm2\" (UniqueName: \"kubernetes.io/projected/c1a7008b-3448-4108-81b0-4d16484a6f7b-kube-api-access-wcrm2\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.569561 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " 
pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.569609 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.570938 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-config\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.570953 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-scripts\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.571291 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.572723 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-8wtwb" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.573513 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.573762 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.575384 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.590103 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrm2\" (UniqueName: \"kubernetes.io/projected/c1a7008b-3448-4108-81b0-4d16484a6f7b-kube-api-access-wcrm2\") pod \"ovn-northd-0\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.635775 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.671315 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.673490 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-config\") pod \"fc889568-9917-471c-b7b6-02d71536e6db\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.673578 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm9lj\" (UniqueName: \"kubernetes.io/projected/e9859523-7b91-42bf-9439-b86433c88754-kube-api-access-mm9lj\") pod \"e9859523-7b91-42bf-9439-b86433c88754\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.673694 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-config\") pod \"e9859523-7b91-42bf-9439-b86433c88754\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.673982 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q42ll\" (UniqueName: \"kubernetes.io/projected/fc889568-9917-471c-b7b6-02d71536e6db-kube-api-access-q42ll\") pod \"fc889568-9917-471c-b7b6-02d71536e6db\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.674065 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-dns-svc\") pod \"e9859523-7b91-42bf-9439-b86433c88754\" (UID: \"e9859523-7b91-42bf-9439-b86433c88754\") " Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.674110 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-dns-svc\") pod \"fc889568-9917-471c-b7b6-02d71536e6db\" (UID: \"fc889568-9917-471c-b7b6-02d71536e6db\") " Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.677136 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc889568-9917-471c-b7b6-02d71536e6db-kube-api-access-q42ll" (OuterVolumeSpecName: "kube-api-access-q42ll") pod "fc889568-9917-471c-b7b6-02d71536e6db" (UID: "fc889568-9917-471c-b7b6-02d71536e6db"). InnerVolumeSpecName "kube-api-access-q42ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.678873 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9859523-7b91-42bf-9439-b86433c88754-kube-api-access-mm9lj" (OuterVolumeSpecName: "kube-api-access-mm9lj") pod "e9859523-7b91-42bf-9439-b86433c88754" (UID: "e9859523-7b91-42bf-9439-b86433c88754"). InnerVolumeSpecName "kube-api-access-mm9lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.742151 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-config" (OuterVolumeSpecName: "config") pod "fc889568-9917-471c-b7b6-02d71536e6db" (UID: "fc889568-9917-471c-b7b6-02d71536e6db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.757588 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-config" (OuterVolumeSpecName: "config") pod "e9859523-7b91-42bf-9439-b86433c88754" (UID: "e9859523-7b91-42bf-9439-b86433c88754"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.774365 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9859523-7b91-42bf-9439-b86433c88754" (UID: "e9859523-7b91-42bf-9439-b86433c88754"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.774434 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc889568-9917-471c-b7b6-02d71536e6db" (UID: "fc889568-9917-471c-b7b6-02d71536e6db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.784288 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.790724 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.791360 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm9lj\" (UniqueName: \"kubernetes.io/projected/e9859523-7b91-42bf-9439-b86433c88754-kube-api-access-mm9lj\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.791384 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.791408 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q42ll\" (UniqueName: \"kubernetes.io/projected/fc889568-9917-471c-b7b6-02d71536e6db-kube-api-access-q42ll\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.791423 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9859523-7b91-42bf-9439-b86433c88754-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.791436 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc889568-9917-471c-b7b6-02d71536e6db-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.817703 
4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-946f77c87-k2cmv"] Oct 13 06:45:17 crc kubenswrapper[4833]: I1013 06:45:17.931892 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-w8zvt"] Oct 13 06:45:17 crc kubenswrapper[4833]: W1013 06:45:17.978468 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbaa5e3f_00a5_4af4_a775_968ad570939c.slice/crio-44a931d9e0711539c02a75e259c964a99056690ae4bf73ac870fb1930131deb3 WatchSource:0}: Error finding container 44a931d9e0711539c02a75e259c964a99056690ae4bf73ac870fb1930131deb3: Status 404 returned error can't find the container with id 44a931d9e0711539c02a75e259c964a99056690ae4bf73ac870fb1930131deb3 Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.022473 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.022784 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc889568-9917-471c-b7b6-02d71536e6db" containerName="init" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.022800 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc889568-9917-471c-b7b6-02d71536e6db" containerName="init" Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.022814 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc889568-9917-471c-b7b6-02d71536e6db" containerName="dnsmasq-dns" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.022820 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc889568-9917-471c-b7b6-02d71536e6db" containerName="dnsmasq-dns" Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.022834 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9859523-7b91-42bf-9439-b86433c88754" containerName="dnsmasq-dns" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.022841 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9859523-7b91-42bf-9439-b86433c88754" containerName="dnsmasq-dns" Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.022856 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9859523-7b91-42bf-9439-b86433c88754" containerName="init" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.022862 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9859523-7b91-42bf-9439-b86433c88754" containerName="init" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.023012 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc889568-9917-471c-b7b6-02d71536e6db" containerName="dnsmasq-dns" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.023030 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9859523-7b91-42bf-9439-b86433c88754" containerName="dnsmasq-dns" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.031764 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.039224 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.039375 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.039970 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xkmh4" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.040121 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.040885 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.073944 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" event={"ID":"cbaa5e3f-00a5-4af4-a775-968ad570939c","Type":"ContainerStarted","Data":"44a931d9e0711539c02a75e259c964a99056690ae4bf73ac870fb1930131deb3"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.079327 4833 generic.go:334] "Generic (PLEG): container finished" podID="e9859523-7b91-42bf-9439-b86433c88754" containerID="f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d" exitCode=0 Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.079365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" event={"ID":"e9859523-7b91-42bf-9439-b86433c88754","Type":"ContainerDied","Data":"f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.079380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" event={"ID":"e9859523-7b91-42bf-9439-b86433c88754","Type":"ContainerDied","Data":"c14c7864df31a6838b000dd0f45bdfbd4ff27f348ebb130aa6f358b585e960c2"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.079394 4833 scope.go:117] "RemoveContainer" containerID="f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.079502 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8575fc99d7-phl5s" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.090480 4833 generic.go:334] "Generic (PLEG): container finished" podID="6741bee5-5b7b-46cf-ae60-c620f80784cf" containerID="51c7e847ad7b1aea6a261deb9df80964369da97912fa3f2f74506f283a3c1744" exitCode=0 Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.090747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" event={"ID":"6741bee5-5b7b-46cf-ae60-c620f80784cf","Type":"ContainerDied","Data":"51c7e847ad7b1aea6a261deb9df80964369da97912fa3f2f74506f283a3c1744"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.090800 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" event={"ID":"6741bee5-5b7b-46cf-ae60-c620f80784cf","Type":"ContainerStarted","Data":"18a5bafd94dfb29f2140a48865e9e7e2f06fca61e2472eaf3e86b8593e4b6766"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.101284 4833 generic.go:334] "Generic (PLEG): container finished" podID="fc889568-9917-471c-b7b6-02d71536e6db" containerID="adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730" exitCode=0 Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.101343 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-8wtwb" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.101380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-8wtwb" event={"ID":"fc889568-9917-471c-b7b6-02d71536e6db","Type":"ContainerDied","Data":"adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.101427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-8wtwb" event={"ID":"fc889568-9917-471c-b7b6-02d71536e6db","Type":"ContainerDied","Data":"3293b820fe3dd8baa4be06ffe801c807bb9e1da907d2196959bf7ad31dca579b"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.101840 4833 scope.go:117] "RemoveContainer" containerID="8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.106322 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" event={"ID":"baad73e5-be59-4193-8456-5c5d3c4a0b90","Type":"ContainerStarted","Data":"a5241b33c937315f0c27c29de754747190d3ce032c9119083ec5b748ba0a5ce9"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.111455 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lx4t5" event={"ID":"c7b98eb9-459c-4a87-88e3-63624b7969b9","Type":"ContainerStarted","Data":"475bd41d6600098aca15ac0e690b3a40fb08bae6907e1462c6932c353651641a"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.111623 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lx4t5" event={"ID":"c7b98eb9-459c-4a87-88e3-63624b7969b9","Type":"ContainerStarted","Data":"d3bf196a8a0c5a27ec39877a9487a2fb49faed62d9de1c40612c34a540858e39"} Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.139550 4833 scope.go:117] "RemoveContainer" containerID="f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d" Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.140329 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d\": container with ID starting with f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d not found: ID does not exist" containerID="f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.140423 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d"} err="failed to get container status \"f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d\": rpc error: code = NotFound desc = could not find container \"f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d\": container with ID starting with f90207c307f761a1960137eca06aee1ecc46f72783ef24fde5e40fec68c8689d not found: ID does not exist" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.140455 4833 scope.go:117] "RemoveContainer" containerID="8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f" Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.140970 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f\": container with ID starting with 8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f not found: ID does not exist" containerID="8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.140988 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f"} err="failed to get container status \"8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f\": rpc error: code = NotFound desc = could not find container \"8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f\": container with ID starting with 8d58252cf87ea3adc97c942848b280be283be3224920eedff5a9c091c96cef7f not found: ID does not exist" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.141000 4833 scope.go:117] "RemoveContainer" containerID="adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.182062 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lx4t5" podStartSLOduration=2.182039481 podStartE2EDuration="2.182039481s" podCreationTimestamp="2025-10-13 06:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:45:18.142261846 +0000 UTC m=+1008.242684792" watchObservedRunningTime="2025-10-13 06:45:18.182039481 +0000 UTC m=+1008.282462397" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.198585 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-lock\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.198647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-cache\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " 
pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.198664 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.198683 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.198888 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95477\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-kube-api-access-95477\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.200335 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-phl5s"] Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.213109 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-phl5s"] Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.215914 4833 scope.go:117] "RemoveContainer" containerID="98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.225753 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77597f887-8wtwb"] Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.238676 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77597f887-8wtwb"] Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.248497 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.267411 4833 scope.go:117] "RemoveContainer" containerID="adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730" Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.274185 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730\": container with ID starting with adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730 not found: ID does not exist" containerID="adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.274239 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730"} err="failed to get container status \"adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730\": rpc error: code = NotFound desc = could not find container \"adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730\": container with ID starting with adbafea9f749ff7a5d7a91dc927c080454b6d648fcfd1b81a587d5e736c6e730 not found: ID does not exist" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.274273 4833 scope.go:117] "RemoveContainer" containerID="98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff" 
Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.274676 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff\": container with ID starting with 98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff not found: ID does not exist" containerID="98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.274703 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff"} err="failed to get container status \"98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff\": rpc error: code = NotFound desc = could not find container \"98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff\": container with ID starting with 98c0f186e0b73e50f0dfade994d9dcaa055e8e44538aea7d5f6678805ce163ff not found: ID does not exist" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.300855 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-lock\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.301234 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-cache\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.301254 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.301268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.301336 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95477\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-kube-api-access-95477\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.301715 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.301728 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.301804 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift podName:23940e94-2a8f-4e11-b8aa-31fbcd8d9076 nodeName:}" failed. 
No retries permitted until 2025-10-13 06:45:18.801751045 +0000 UTC m=+1008.902173961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift") pod "swift-storage-0" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076") : configmap "swift-ring-files" not found Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.301955 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.303692 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-cache\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.304176 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-lock\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.327002 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95477\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-kube-api-access-95477\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.329266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.411416 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.505114 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-ovsdbserver-sb\") pod \"6741bee5-5b7b-46cf-ae60-c620f80784cf\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.505195 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-dns-svc\") pod \"6741bee5-5b7b-46cf-ae60-c620f80784cf\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.505339 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-config\") pod \"6741bee5-5b7b-46cf-ae60-c620f80784cf\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.505378 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fgkj\" (UniqueName: \"kubernetes.io/projected/6741bee5-5b7b-46cf-ae60-c620f80784cf-kube-api-access-5fgkj\") pod \"6741bee5-5b7b-46cf-ae60-c620f80784cf\" (UID: \"6741bee5-5b7b-46cf-ae60-c620f80784cf\") " Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.511596 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6741bee5-5b7b-46cf-ae60-c620f80784cf-kube-api-access-5fgkj" (OuterVolumeSpecName: "kube-api-access-5fgkj") pod "6741bee5-5b7b-46cf-ae60-c620f80784cf" (UID: "6741bee5-5b7b-46cf-ae60-c620f80784cf"). InnerVolumeSpecName "kube-api-access-5fgkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.536935 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bt47h"] Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.537238 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6741bee5-5b7b-46cf-ae60-c620f80784cf" containerName="init" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.537253 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6741bee5-5b7b-46cf-ae60-c620f80784cf" containerName="init" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.537415 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6741bee5-5b7b-46cf-ae60-c620f80784cf" containerName="init" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.537910 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.544196 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.544353 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.544504 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.548351 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bt47h"] Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.548948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-config" (OuterVolumeSpecName: "config") pod "6741bee5-5b7b-46cf-ae60-c620f80784cf" (UID: "6741bee5-5b7b-46cf-ae60-c620f80784cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.578744 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6741bee5-5b7b-46cf-ae60-c620f80784cf" (UID: "6741bee5-5b7b-46cf-ae60-c620f80784cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.579246 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6741bee5-5b7b-46cf-ae60-c620f80784cf" (UID: "6741bee5-5b7b-46cf-ae60-c620f80784cf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610175 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-scripts\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610266 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-combined-ca-bundle\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610340 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/852eefdb-1f3c-4a86-a930-24627d79056e-etc-swift\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610406 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cpht\" (UniqueName: \"kubernetes.io/projected/852eefdb-1f3c-4a86-a930-24627d79056e-kube-api-access-6cpht\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-ring-data-devices\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-dispersionconf\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610563 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-swiftconf\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610660 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610695 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610711 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6741bee5-5b7b-46cf-ae60-c620f80784cf-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.610722 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fgkj\" (UniqueName: \"kubernetes.io/projected/6741bee5-5b7b-46cf-ae60-c620f80784cf-kube-api-access-5fgkj\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.644791 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9859523-7b91-42bf-9439-b86433c88754" path="/var/lib/kubelet/pods/e9859523-7b91-42bf-9439-b86433c88754/volumes" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.645434 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc889568-9917-471c-b7b6-02d71536e6db" path="/var/lib/kubelet/pods/fc889568-9917-471c-b7b6-02d71536e6db/volumes" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.712717 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-scripts\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.712781 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-combined-ca-bundle\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.712840 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/852eefdb-1f3c-4a86-a930-24627d79056e-etc-swift\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.712884 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cpht\" (UniqueName: \"kubernetes.io/projected/852eefdb-1f3c-4a86-a930-24627d79056e-kube-api-access-6cpht\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.712940 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-ring-data-devices\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.712989 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-dispersionconf\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.713034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-swiftconf\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " 
pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.714667 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-scripts\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.722429 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-combined-ca-bundle\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.723028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-ring-data-devices\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.733557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/852eefdb-1f3c-4a86-a930-24627d79056e-etc-swift\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.737003 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-swiftconf\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.740916 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-dispersionconf\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.755745 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cpht\" (UniqueName: \"kubernetes.io/projected/852eefdb-1f3c-4a86-a930-24627d79056e-kube-api-access-6cpht\") pod \"swift-ring-rebalance-bt47h\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.814162 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.814353 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.814376 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 13 06:45:18 crc kubenswrapper[4833]: E1013 06:45:18.814438 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift podName:23940e94-2a8f-4e11-b8aa-31fbcd8d9076 nodeName:}" failed. No retries permitted until 2025-10-13 06:45:19.814418794 +0000 UTC m=+1009.914841710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift") pod "swift-storage-0" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076") : configmap "swift-ring-files" not found Oct 13 06:45:18 crc kubenswrapper[4833]: I1013 06:45:18.860786 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.120672 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" event={"ID":"6741bee5-5b7b-46cf-ae60-c620f80784cf","Type":"ContainerDied","Data":"18a5bafd94dfb29f2140a48865e9e7e2f06fca61e2472eaf3e86b8593e4b6766"} Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.120729 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-v7pt9" Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.120757 4833 scope.go:117] "RemoveContainer" containerID="51c7e847ad7b1aea6a261deb9df80964369da97912fa3f2f74506f283a3c1744" Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.127441 4833 generic.go:334] "Generic (PLEG): container finished" podID="baad73e5-be59-4193-8456-5c5d3c4a0b90" containerID="c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30" exitCode=0 Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.127517 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" event={"ID":"baad73e5-be59-4193-8456-5c5d3c4a0b90","Type":"ContainerDied","Data":"c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30"} Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.128968 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1a7008b-3448-4108-81b0-4d16484a6f7b","Type":"ContainerStarted","Data":"81f3e93d3087305b07519551ee186a721be27eea1cc8284e47a43644774e7604"} Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.130547 4833 generic.go:334] "Generic (PLEG): container finished" podID="cbaa5e3f-00a5-4af4-a775-968ad570939c" containerID="6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3" exitCode=0 Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.130613 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" event={"ID":"cbaa5e3f-00a5-4af4-a775-968ad570939c","Type":"ContainerDied","Data":"6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3"} Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.197345 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-v7pt9"] Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.203958 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-v7pt9"] Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.283755 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bt47h"] Oct 13 06:45:19 crc kubenswrapper[4833]: I1013 06:45:19.831603 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod 
\"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:19 crc kubenswrapper[4833]: E1013 06:45:19.831827 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 06:45:19 crc kubenswrapper[4833]: E1013 06:45:19.831846 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 13 06:45:19 crc kubenswrapper[4833]: E1013 06:45:19.831890 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift podName:23940e94-2a8f-4e11-b8aa-31fbcd8d9076 nodeName:}" failed. No retries permitted until 2025-10-13 06:45:21.831873394 +0000 UTC m=+1011.932296310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift") pod "swift-storage-0" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076") : configmap "swift-ring-files" not found Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.142237 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" event={"ID":"cbaa5e3f-00a5-4af4-a775-968ad570939c","Type":"ContainerStarted","Data":"aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7"} Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.142946 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.144799 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bt47h" event={"ID":"852eefdb-1f3c-4a86-a930-24627d79056e","Type":"ContainerStarted","Data":"841a04c6a088a44216f09830a4f9f684a81934fe54bfb753c3b9fe9f4ad26b1f"} Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.146162 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" event={"ID":"baad73e5-be59-4193-8456-5c5d3c4a0b90","Type":"ContainerStarted","Data":"a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c"} Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.146315 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.147367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1a7008b-3448-4108-81b0-4d16484a6f7b","Type":"ContainerStarted","Data":"54a34d37063fa7510c51a589e85db2af1e8eef4bc3dcb4482d914746021edcd6"} Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.147392 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1a7008b-3448-4108-81b0-4d16484a6f7b","Type":"ContainerStarted","Data":"e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d"} Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.189858 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" podStartSLOduration=4.189835283 podStartE2EDuration="4.189835283s" podCreationTimestamp="2025-10-13 06:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:45:20.18043574 +0000 UTC m=+1010.280858666" watchObservedRunningTime="2025-10-13 06:45:20.189835283 +0000 
UTC m=+1010.290258199" Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.205139 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" podStartSLOduration=4.205118926 podStartE2EDuration="4.205118926s" podCreationTimestamp="2025-10-13 06:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:45:20.203246452 +0000 UTC m=+1010.303669368" watchObservedRunningTime="2025-10-13 06:45:20.205118926 +0000 UTC m=+1010.305541842" Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.218472 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.754814214 podStartE2EDuration="3.218455493s" podCreationTimestamp="2025-10-13 06:45:17 +0000 UTC" firstStartedPulling="2025-10-13 06:45:18.244102702 +0000 UTC m=+1008.344525618" lastFinishedPulling="2025-10-13 06:45:19.707743991 +0000 UTC m=+1009.808166897" observedRunningTime="2025-10-13 06:45:20.218312219 +0000 UTC m=+1010.318735135" watchObservedRunningTime="2025-10-13 06:45:20.218455493 +0000 UTC m=+1010.318878409" Oct 13 06:45:20 crc kubenswrapper[4833]: I1013 06:45:20.640399 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6741bee5-5b7b-46cf-ae60-c620f80784cf" path="/var/lib/kubelet/pods/6741bee5-5b7b-46cf-ae60-c620f80784cf/volumes" Oct 13 06:45:21 crc kubenswrapper[4833]: I1013 06:45:21.165759 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 13 06:45:21 crc kubenswrapper[4833]: I1013 06:45:21.867214 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:21 crc kubenswrapper[4833]: E1013 06:45:21.867495 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 06:45:21 crc kubenswrapper[4833]: E1013 06:45:21.867524 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 13 06:45:21 crc kubenswrapper[4833]: E1013 06:45:21.867616 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift podName:23940e94-2a8f-4e11-b8aa-31fbcd8d9076 nodeName:}" failed. No retries permitted until 2025-10-13 06:45:25.867596505 +0000 UTC m=+1015.968019421 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift") pod "swift-storage-0" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076") : configmap "swift-ring-files" not found Oct 13 06:45:23 crc kubenswrapper[4833]: I1013 06:45:23.184038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bt47h" event={"ID":"852eefdb-1f3c-4a86-a930-24627d79056e","Type":"ContainerStarted","Data":"5695eb3a82d3a3e492348a23917f842cdbe3066717947a49e3101bca340c7b89"} Oct 13 06:45:23 crc kubenswrapper[4833]: I1013 06:45:23.206039 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bt47h" podStartSLOduration=1.8765388 podStartE2EDuration="5.20601958s" podCreationTimestamp="2025-10-13 06:45:18 +0000 UTC" firstStartedPulling="2025-10-13 06:45:19.3225028 +0000 UTC m=+1009.422925716" lastFinishedPulling="2025-10-13 06:45:22.65198358 +0000 UTC m=+1012.752406496" observedRunningTime="2025-10-13 06:45:23.204935498 +0000 UTC m=+1013.305358424" watchObservedRunningTime="2025-10-13 06:45:23.20601958 +0000 UTC m=+1013.306442506" Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.747042 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mhvzb"] Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.748080 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mhvzb" Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.760765 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mhvzb"] Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.835629 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd5bv\" (UniqueName: \"kubernetes.io/projected/152bb234-9831-4182-b04e-61e6693051f8-kube-api-access-jd5bv\") pod \"keystone-db-create-mhvzb\" (UID: \"152bb234-9831-4182-b04e-61e6693051f8\") " pod="openstack/keystone-db-create-mhvzb" Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.937156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd5bv\" (UniqueName: \"kubernetes.io/projected/152bb234-9831-4182-b04e-61e6693051f8-kube-api-access-jd5bv\") pod \"keystone-db-create-mhvzb\" (UID: \"152bb234-9831-4182-b04e-61e6693051f8\") " pod="openstack/keystone-db-create-mhvzb" Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.960256 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd5bv\" (UniqueName: \"kubernetes.io/projected/152bb234-9831-4182-b04e-61e6693051f8-kube-api-access-jd5bv\") pod \"keystone-db-create-mhvzb\" (UID: \"152bb234-9831-4182-b04e-61e6693051f8\") " pod="openstack/keystone-db-create-mhvzb" Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.963121 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hqwdg"] Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.964075 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hqwdg" Oct 13 06:45:24 crc kubenswrapper[4833]: I1013 06:45:24.969046 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hqwdg"] Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.039806 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq27\" (UniqueName: \"kubernetes.io/projected/18230b58-b3cf-42e9-afa9-cf99564680d4-kube-api-access-sbq27\") pod \"placement-db-create-hqwdg\" (UID: \"18230b58-b3cf-42e9-afa9-cf99564680d4\") " pod="openstack/placement-db-create-hqwdg" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.071389 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mhvzb" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.141611 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbq27\" (UniqueName: \"kubernetes.io/projected/18230b58-b3cf-42e9-afa9-cf99564680d4-kube-api-access-sbq27\") pod \"placement-db-create-hqwdg\" (UID: \"18230b58-b3cf-42e9-afa9-cf99564680d4\") " pod="openstack/placement-db-create-hqwdg" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.163077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbq27\" (UniqueName: \"kubernetes.io/projected/18230b58-b3cf-42e9-afa9-cf99564680d4-kube-api-access-sbq27\") pod \"placement-db-create-hqwdg\" (UID: \"18230b58-b3cf-42e9-afa9-cf99564680d4\") " pod="openstack/placement-db-create-hqwdg" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.269361 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kgzz2"] Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.271267 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kgzz2" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.287329 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kgzz2"] Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.318234 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hqwdg" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.344744 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwzgb\" (UniqueName: \"kubernetes.io/projected/88fb9926-26ba-4c88-b633-7192f7391494-kube-api-access-nwzgb\") pod \"glance-db-create-kgzz2\" (UID: \"88fb9926-26ba-4c88-b633-7192f7391494\") " pod="openstack/glance-db-create-kgzz2" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.447333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwzgb\" (UniqueName: \"kubernetes.io/projected/88fb9926-26ba-4c88-b633-7192f7391494-kube-api-access-nwzgb\") pod \"glance-db-create-kgzz2\" (UID: \"88fb9926-26ba-4c88-b633-7192f7391494\") " pod="openstack/glance-db-create-kgzz2" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.471616 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwzgb\" (UniqueName: \"kubernetes.io/projected/88fb9926-26ba-4c88-b633-7192f7391494-kube-api-access-nwzgb\") pod \"glance-db-create-kgzz2\" (UID: \"88fb9926-26ba-4c88-b633-7192f7391494\") " pod="openstack/glance-db-create-kgzz2" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.532881 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mhvzb"] Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.540099 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hqwdg"] Oct 13 06:45:25 crc kubenswrapper[4833]: W1013 06:45:25.540928 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod152bb234_9831_4182_b04e_61e6693051f8.slice/crio-8afb7228adc1ff4464c8fe6d1d3d7332d2d7afb8e2b09659bf470a0ac16eafd3 WatchSource:0}: Error finding container 8afb7228adc1ff4464c8fe6d1d3d7332d2d7afb8e2b09659bf470a0ac16eafd3: Status 404 returned error can't find the container with id 8afb7228adc1ff4464c8fe6d1d3d7332d2d7afb8e2b09659bf470a0ac16eafd3 Oct 13 06:45:25 crc kubenswrapper[4833]: W1013 06:45:25.541214 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18230b58_b3cf_42e9_afa9_cf99564680d4.slice/crio-95ad3981bab672d05291cebd25ca267f61f51a1a071e8a2346d47c6784e19346 WatchSource:0}: Error finding container 95ad3981bab672d05291cebd25ca267f61f51a1a071e8a2346d47c6784e19346: Status 404 returned error can't find the container with id 95ad3981bab672d05291cebd25ca267f61f51a1a071e8a2346d47c6784e19346 Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.604734 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kgzz2" Oct 13 06:45:25 crc kubenswrapper[4833]: I1013 06:45:25.957474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:25 crc kubenswrapper[4833]: E1013 06:45:25.957703 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 06:45:25 crc kubenswrapper[4833]: E1013 06:45:25.957913 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 13 06:45:25 crc kubenswrapper[4833]: E1013 06:45:25.957986 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift podName:23940e94-2a8f-4e11-b8aa-31fbcd8d9076 nodeName:}" failed. No retries permitted until 2025-10-13 06:45:33.957967678 +0000 UTC m=+1024.058390594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift") pod "swift-storage-0" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076") : configmap "swift-ring-files" not found Oct 13 06:45:26 crc kubenswrapper[4833]: W1013 06:45:26.125566 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88fb9926_26ba_4c88_b633_7192f7391494.slice/crio-c4daccf0139f263a6d320042315006f873888610525690cfa12728871f792bea WatchSource:0}: Error finding container c4daccf0139f263a6d320042315006f873888610525690cfa12728871f792bea: Status 404 returned error can't find the container with id c4daccf0139f263a6d320042315006f873888610525690cfa12728871f792bea Oct 13 06:45:26 crc kubenswrapper[4833]: I1013 06:45:26.127027 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kgzz2"] Oct 13 06:45:26 crc kubenswrapper[4833]: I1013 06:45:26.208084 4833 generic.go:334] "Generic (PLEG): container finished" podID="152bb234-9831-4182-b04e-61e6693051f8" containerID="6ab1238369a6f5949fd2069743bf6e4ae555e077f1842bd96f9fae2a7a21713c" exitCode=0 Oct 13 06:45:26 crc kubenswrapper[4833]: I1013 06:45:26.208150 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mhvzb" event={"ID":"152bb234-9831-4182-b04e-61e6693051f8","Type":"ContainerDied","Data":"6ab1238369a6f5949fd2069743bf6e4ae555e077f1842bd96f9fae2a7a21713c"} Oct 13 06:45:26 crc kubenswrapper[4833]: I1013 06:45:26.208176 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mhvzb" event={"ID":"152bb234-9831-4182-b04e-61e6693051f8","Type":"ContainerStarted","Data":"8afb7228adc1ff4464c8fe6d1d3d7332d2d7afb8e2b09659bf470a0ac16eafd3"} Oct 13 06:45:26 crc kubenswrapper[4833]: I1013 06:45:26.210705 4833 generic.go:334] "Generic (PLEG): container finished" podID="18230b58-b3cf-42e9-afa9-cf99564680d4" containerID="a71d16b69b5e8b1db89864394761fa02481b8e3f821d42d70b8b6ffb4df0bc2c" exitCode=0 Oct 13 06:45:26 crc kubenswrapper[4833]: I1013 06:45:26.210809 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqwdg" 
event={"ID":"18230b58-b3cf-42e9-afa9-cf99564680d4","Type":"ContainerDied","Data":"a71d16b69b5e8b1db89864394761fa02481b8e3f821d42d70b8b6ffb4df0bc2c"} Oct 13 06:45:26 crc kubenswrapper[4833]: I1013 06:45:26.210845 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqwdg" event={"ID":"18230b58-b3cf-42e9-afa9-cf99564680d4","Type":"ContainerStarted","Data":"95ad3981bab672d05291cebd25ca267f61f51a1a071e8a2346d47c6784e19346"} Oct 13 06:45:26 crc kubenswrapper[4833]: I1013 06:45:26.212087 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kgzz2" event={"ID":"88fb9926-26ba-4c88-b633-7192f7391494","Type":"ContainerStarted","Data":"c4daccf0139f263a6d320042315006f873888610525690cfa12728871f792bea"} Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.220447 4833 generic.go:334] "Generic (PLEG): container finished" podID="88fb9926-26ba-4c88-b633-7192f7391494" containerID="dfb5554cc1e88bb53880bea3c9c794e8bd7e2ca9872d2a7c6cd5b75d875acd2b" exitCode=0 Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.220502 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kgzz2" event={"ID":"88fb9926-26ba-4c88-b633-7192f7391494","Type":"ContainerDied","Data":"dfb5554cc1e88bb53880bea3c9c794e8bd7e2ca9872d2a7c6cd5b75d875acd2b"} Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.282730 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.343726 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.410039 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-946f77c87-k2cmv"] Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.546679 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqwdg" Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.584254 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbq27\" (UniqueName: \"kubernetes.io/projected/18230b58-b3cf-42e9-afa9-cf99564680d4-kube-api-access-sbq27\") pod \"18230b58-b3cf-42e9-afa9-cf99564680d4\" (UID: \"18230b58-b3cf-42e9-afa9-cf99564680d4\") " Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.589495 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18230b58-b3cf-42e9-afa9-cf99564680d4-kube-api-access-sbq27" (OuterVolumeSpecName: "kube-api-access-sbq27") pod "18230b58-b3cf-42e9-afa9-cf99564680d4" (UID: "18230b58-b3cf-42e9-afa9-cf99564680d4"). InnerVolumeSpecName "kube-api-access-sbq27". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.624347 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mhvzb" Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.685842 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd5bv\" (UniqueName: \"kubernetes.io/projected/152bb234-9831-4182-b04e-61e6693051f8-kube-api-access-jd5bv\") pod \"152bb234-9831-4182-b04e-61e6693051f8\" (UID: \"152bb234-9831-4182-b04e-61e6693051f8\") " Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.686377 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbq27\" (UniqueName: \"kubernetes.io/projected/18230b58-b3cf-42e9-afa9-cf99564680d4-kube-api-access-sbq27\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.688583 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152bb234-9831-4182-b04e-61e6693051f8-kube-api-access-jd5bv" (OuterVolumeSpecName: "kube-api-access-jd5bv") pod "152bb234-9831-4182-b04e-61e6693051f8" (UID: "152bb234-9831-4182-b04e-61e6693051f8"). InnerVolumeSpecName "kube-api-access-jd5bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:27 crc kubenswrapper[4833]: I1013 06:45:27.788174 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd5bv\" (UniqueName: \"kubernetes.io/projected/152bb234-9831-4182-b04e-61e6693051f8-kube-api-access-jd5bv\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.230364 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mhvzb" event={"ID":"152bb234-9831-4182-b04e-61e6693051f8","Type":"ContainerDied","Data":"8afb7228adc1ff4464c8fe6d1d3d7332d2d7afb8e2b09659bf470a0ac16eafd3"} Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.230416 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8afb7228adc1ff4464c8fe6d1d3d7332d2d7afb8e2b09659bf470a0ac16eafd3" Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.230386 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mhvzb" Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.233973 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqwdg" event={"ID":"18230b58-b3cf-42e9-afa9-cf99564680d4","Type":"ContainerDied","Data":"95ad3981bab672d05291cebd25ca267f61f51a1a071e8a2346d47c6784e19346"} Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.234051 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ad3981bab672d05291cebd25ca267f61f51a1a071e8a2346d47c6784e19346" Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.234089 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" podUID="baad73e5-be59-4193-8456-5c5d3c4a0b90" containerName="dnsmasq-dns" containerID="cri-o://a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c" gracePeriod=10 Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.234360 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqwdg" Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.508598 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kgzz2" Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.601604 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwzgb\" (UniqueName: \"kubernetes.io/projected/88fb9926-26ba-4c88-b633-7192f7391494-kube-api-access-nwzgb\") pod \"88fb9926-26ba-4c88-b633-7192f7391494\" (UID: \"88fb9926-26ba-4c88-b633-7192f7391494\") " Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.608015 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fb9926-26ba-4c88-b633-7192f7391494-kube-api-access-nwzgb" (OuterVolumeSpecName: "kube-api-access-nwzgb") pod "88fb9926-26ba-4c88-b633-7192f7391494" (UID: "88fb9926-26ba-4c88-b633-7192f7391494"). InnerVolumeSpecName "kube-api-access-nwzgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:28 crc kubenswrapper[4833]: I1013 06:45:28.703695 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwzgb\" (UniqueName: \"kubernetes.io/projected/88fb9926-26ba-4c88-b633-7192f7391494-kube-api-access-nwzgb\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.072248 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.118492 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-ovsdbserver-sb\") pod \"baad73e5-be59-4193-8456-5c5d3c4a0b90\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.118629 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-dns-svc\") pod \"baad73e5-be59-4193-8456-5c5d3c4a0b90\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.118652 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfhmh\" (UniqueName: \"kubernetes.io/projected/baad73e5-be59-4193-8456-5c5d3c4a0b90-kube-api-access-vfhmh\") pod \"baad73e5-be59-4193-8456-5c5d3c4a0b90\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.118708 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-config\") pod \"baad73e5-be59-4193-8456-5c5d3c4a0b90\" (UID: \"baad73e5-be59-4193-8456-5c5d3c4a0b90\") " Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.128453 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baad73e5-be59-4193-8456-5c5d3c4a0b90-kube-api-access-vfhmh" (OuterVolumeSpecName: "kube-api-access-vfhmh") pod "baad73e5-be59-4193-8456-5c5d3c4a0b90" (UID: "baad73e5-be59-4193-8456-5c5d3c4a0b90"). InnerVolumeSpecName "kube-api-access-vfhmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.159660 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "baad73e5-be59-4193-8456-5c5d3c4a0b90" (UID: "baad73e5-be59-4193-8456-5c5d3c4a0b90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.173307 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "baad73e5-be59-4193-8456-5c5d3c4a0b90" (UID: "baad73e5-be59-4193-8456-5c5d3c4a0b90"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.174819 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-config" (OuterVolumeSpecName: "config") pod "baad73e5-be59-4193-8456-5c5d3c4a0b90" (UID: "baad73e5-be59-4193-8456-5c5d3c4a0b90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.220811 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.220865 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.220882 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baad73e5-be59-4193-8456-5c5d3c4a0b90-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.220895 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfhmh\" (UniqueName: \"kubernetes.io/projected/baad73e5-be59-4193-8456-5c5d3c4a0b90-kube-api-access-vfhmh\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.241318 4833 generic.go:334] "Generic (PLEG): container finished" podID="baad73e5-be59-4193-8456-5c5d3c4a0b90" containerID="a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c" exitCode=0 Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.241367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" event={"ID":"baad73e5-be59-4193-8456-5c5d3c4a0b90","Type":"ContainerDied","Data":"a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c"} Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.241396 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.241439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946f77c87-k2cmv" event={"ID":"baad73e5-be59-4193-8456-5c5d3c4a0b90","Type":"ContainerDied","Data":"a5241b33c937315f0c27c29de754747190d3ce032c9119083ec5b748ba0a5ce9"} Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.241459 4833 scope.go:117] "RemoveContainer" containerID="a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.243307 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kgzz2" event={"ID":"88fb9926-26ba-4c88-b633-7192f7391494","Type":"ContainerDied","Data":"c4daccf0139f263a6d320042315006f873888610525690cfa12728871f792bea"} Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.243333 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4daccf0139f263a6d320042315006f873888610525690cfa12728871f792bea" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.243456 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kgzz2" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.272094 4833 scope.go:117] "RemoveContainer" containerID="c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.297593 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-946f77c87-k2cmv"] Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.298687 4833 scope.go:117] "RemoveContainer" containerID="a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c" Oct 13 06:45:29 crc kubenswrapper[4833]: E1013 06:45:29.300863 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c\": container with ID starting with a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c not found: ID does not exist" containerID="a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.300909 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c"} err="failed to get container status \"a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c\": rpc error: code = NotFound desc = could not find container \"a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c\": container with ID starting with a8965642a2ada74f07812289ba8d770c18c64c0c9916f75accfbefe31af0c92c not found: ID does not exist" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.300941 4833 scope.go:117] "RemoveContainer" containerID="c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30" Oct 13 06:45:29 crc kubenswrapper[4833]: E1013 06:45:29.301595 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30\": container with ID starting with c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30 not found: ID does not exist" containerID="c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.301643 
4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30"} err="failed to get container status \"c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30\": rpc error: code = NotFound desc = could not find container \"c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30\": container with ID starting with c7063e6cc569055308ff2a82021349d98f79a63bb3dab1481e607ced1bf82a30 not found: ID does not exist" Oct 13 06:45:29 crc kubenswrapper[4833]: I1013 06:45:29.305822 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-946f77c87-k2cmv"] Oct 13 06:45:30 crc kubenswrapper[4833]: I1013 06:45:30.253597 4833 generic.go:334] "Generic (PLEG): container finished" podID="852eefdb-1f3c-4a86-a930-24627d79056e" containerID="5695eb3a82d3a3e492348a23917f842cdbe3066717947a49e3101bca340c7b89" exitCode=0 Oct 13 06:45:30 crc kubenswrapper[4833]: I1013 06:45:30.253654 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bt47h" event={"ID":"852eefdb-1f3c-4a86-a930-24627d79056e","Type":"ContainerDied","Data":"5695eb3a82d3a3e492348a23917f842cdbe3066717947a49e3101bca340c7b89"} Oct 13 06:45:30 crc kubenswrapper[4833]: I1013 06:45:30.646683 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baad73e5-be59-4193-8456-5c5d3c4a0b90" path="/var/lib/kubelet/pods/baad73e5-be59-4193-8456-5c5d3c4a0b90/volumes" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.595649 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.761197 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-swiftconf\") pod \"852eefdb-1f3c-4a86-a930-24627d79056e\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.761319 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/852eefdb-1f3c-4a86-a930-24627d79056e-etc-swift\") pod \"852eefdb-1f3c-4a86-a930-24627d79056e\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.761355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-ring-data-devices\") pod \"852eefdb-1f3c-4a86-a930-24627d79056e\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.761425 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-dispersionconf\") pod \"852eefdb-1f3c-4a86-a930-24627d79056e\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.761499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-combined-ca-bundle\") pod \"852eefdb-1f3c-4a86-a930-24627d79056e\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.761520 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-scripts\") pod \"852eefdb-1f3c-4a86-a930-24627d79056e\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.761607 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cpht\" (UniqueName: \"kubernetes.io/projected/852eefdb-1f3c-4a86-a930-24627d79056e-kube-api-access-6cpht\") pod \"852eefdb-1f3c-4a86-a930-24627d79056e\" (UID: \"852eefdb-1f3c-4a86-a930-24627d79056e\") " Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.762709 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "852eefdb-1f3c-4a86-a930-24627d79056e" (UID: "852eefdb-1f3c-4a86-a930-24627d79056e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.763969 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852eefdb-1f3c-4a86-a930-24627d79056e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "852eefdb-1f3c-4a86-a930-24627d79056e" (UID: "852eefdb-1f3c-4a86-a930-24627d79056e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.767191 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852eefdb-1f3c-4a86-a930-24627d79056e-kube-api-access-6cpht" (OuterVolumeSpecName: "kube-api-access-6cpht") pod "852eefdb-1f3c-4a86-a930-24627d79056e" (UID: "852eefdb-1f3c-4a86-a930-24627d79056e"). InnerVolumeSpecName "kube-api-access-6cpht". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.776031 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "852eefdb-1f3c-4a86-a930-24627d79056e" (UID: "852eefdb-1f3c-4a86-a930-24627d79056e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.786324 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-scripts" (OuterVolumeSpecName: "scripts") pod "852eefdb-1f3c-4a86-a930-24627d79056e" (UID: "852eefdb-1f3c-4a86-a930-24627d79056e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.790720 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "852eefdb-1f3c-4a86-a930-24627d79056e" (UID: "852eefdb-1f3c-4a86-a930-24627d79056e"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.800605 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "852eefdb-1f3c-4a86-a930-24627d79056e" (UID: "852eefdb-1f3c-4a86-a930-24627d79056e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.863502 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.863559 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.863574 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cpht\" (UniqueName: \"kubernetes.io/projected/852eefdb-1f3c-4a86-a930-24627d79056e-kube-api-access-6cpht\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.863587 4833 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.863598 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/852eefdb-1f3c-4a86-a930-24627d79056e-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.863608 4833 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/852eefdb-1f3c-4a86-a930-24627d79056e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:31 crc kubenswrapper[4833]: I1013 06:45:31.863618 4833 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/852eefdb-1f3c-4a86-a930-24627d79056e-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:32 crc kubenswrapper[4833]: I1013 06:45:32.280103 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bt47h" event={"ID":"852eefdb-1f3c-4a86-a930-24627d79056e","Type":"ContainerDied","Data":"841a04c6a088a44216f09830a4f9f684a81934fe54bfb753c3b9fe9f4ad26b1f"} Oct 13 06:45:32 crc kubenswrapper[4833]: I1013 06:45:32.280170 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841a04c6a088a44216f09830a4f9f684a81934fe54bfb753c3b9fe9f4ad26b1f" Oct 13 06:45:32 crc kubenswrapper[4833]: I1013 06:45:32.280178 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bt47h" Oct 13 06:45:32 crc kubenswrapper[4833]: I1013 06:45:32.712270 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 13 06:45:33 crc kubenswrapper[4833]: I1013 06:45:33.994764 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.001354 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod \"swift-storage-0\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " pod="openstack/swift-storage-0" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.267758 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.784748 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-97cb-account-create-rn9b5"] Oct 13 06:45:34 crc kubenswrapper[4833]: E1013 06:45:34.785522 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fb9926-26ba-4c88-b633-7192f7391494" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.785567 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fb9926-26ba-4c88-b633-7192f7391494" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: E1013 06:45:34.785588 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852eefdb-1f3c-4a86-a930-24627d79056e" containerName="swift-ring-rebalance" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.785600 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="852eefdb-1f3c-4a86-a930-24627d79056e" containerName="swift-ring-rebalance" Oct 13 06:45:34 crc kubenswrapper[4833]: E1013 06:45:34.785622 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baad73e5-be59-4193-8456-5c5d3c4a0b90" containerName="init" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.785633 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="baad73e5-be59-4193-8456-5c5d3c4a0b90" containerName="init" Oct 13 06:45:34 crc kubenswrapper[4833]: E1013 06:45:34.785655 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18230b58-b3cf-42e9-afa9-cf99564680d4" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.785664 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="18230b58-b3cf-42e9-afa9-cf99564680d4" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: E1013 06:45:34.785683 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baad73e5-be59-4193-8456-5c5d3c4a0b90" containerName="dnsmasq-dns" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.785693 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="baad73e5-be59-4193-8456-5c5d3c4a0b90" containerName="dnsmasq-dns" Oct 13 06:45:34 crc kubenswrapper[4833]: E1013 06:45:34.785719 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152bb234-9831-4182-b04e-61e6693051f8" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.785732 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="152bb234-9831-4182-b04e-61e6693051f8" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.786924 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="baad73e5-be59-4193-8456-5c5d3c4a0b90" containerName="dnsmasq-dns" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.786949 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="852eefdb-1f3c-4a86-a930-24627d79056e" containerName="swift-ring-rebalance" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.786969 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="152bb234-9831-4182-b04e-61e6693051f8" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.786988 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="18230b58-b3cf-42e9-afa9-cf99564680d4" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.787015 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fb9926-26ba-4c88-b633-7192f7391494" containerName="mariadb-database-create" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.787877 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97cb-account-create-rn9b5" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.790366 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.796020 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-97cb-account-create-rn9b5"] Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.874700 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 13 06:45:34 crc kubenswrapper[4833]: I1013 06:45:34.908790 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfh64\" (UniqueName: \"kubernetes.io/projected/03bb2849-0073-48ac-b568-609f917fe111-kube-api-access-tfh64\") pod \"keystone-97cb-account-create-rn9b5\" (UID: \"03bb2849-0073-48ac-b568-609f917fe111\") " pod="openstack/keystone-97cb-account-create-rn9b5" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.011326 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfh64\" (UniqueName: \"kubernetes.io/projected/03bb2849-0073-48ac-b568-609f917fe111-kube-api-access-tfh64\") pod \"keystone-97cb-account-create-rn9b5\" (UID: \"03bb2849-0073-48ac-b568-609f917fe111\") " pod="openstack/keystone-97cb-account-create-rn9b5" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.033740 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfh64\" (UniqueName: \"kubernetes.io/projected/03bb2849-0073-48ac-b568-609f917fe111-kube-api-access-tfh64\") pod \"keystone-97cb-account-create-rn9b5\" (UID: \"03bb2849-0073-48ac-b568-609f917fe111\") " pod="openstack/keystone-97cb-account-create-rn9b5" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.091184 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3a43-account-create-tzctv"] Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.092412 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a43-account-create-tzctv" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.095514 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.098445 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a43-account-create-tzctv"] Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.109784 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97cb-account-create-rn9b5" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.214393 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts786\" (UniqueName: \"kubernetes.io/projected/6d00616b-9c95-4ae5-aabc-60e2fb039035-kube-api-access-ts786\") pod \"placement-3a43-account-create-tzctv\" (UID: \"6d00616b-9c95-4ae5-aabc-60e2fb039035\") " pod="openstack/placement-3a43-account-create-tzctv" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.316324 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts786\" (UniqueName: \"kubernetes.io/projected/6d00616b-9c95-4ae5-aabc-60e2fb039035-kube-api-access-ts786\") pod \"placement-3a43-account-create-tzctv\" (UID: \"6d00616b-9c95-4ae5-aabc-60e2fb039035\") " pod="openstack/placement-3a43-account-create-tzctv" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.323995 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"d5b48aa7b32081d8b026e457fe94db17fe434a5b2762aebe8799ee491a7df1c2"} Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.333496 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts786\" (UniqueName: \"kubernetes.io/projected/6d00616b-9c95-4ae5-aabc-60e2fb039035-kube-api-access-ts786\") pod \"placement-3a43-account-create-tzctv\" (UID: \"6d00616b-9c95-4ae5-aabc-60e2fb039035\") " pod="openstack/placement-3a43-account-create-tzctv" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.425427 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a43-account-create-tzctv" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.449089 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-68a3-account-create-tmjzb"] Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.450202 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-68a3-account-create-tmjzb" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.453154 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.463366 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-68a3-account-create-tmjzb"] Oct 13 06:45:35 crc kubenswrapper[4833]: W1013 06:45:35.523383 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03bb2849_0073_48ac_b568_609f917fe111.slice/crio-b237b011f47bf8953c6887619bf70877a3b532a4786aefeecee3e9ee432844a0 WatchSource:0}: Error finding container b237b011f47bf8953c6887619bf70877a3b532a4786aefeecee3e9ee432844a0: Status 404 returned error can't find the container with id b237b011f47bf8953c6887619bf70877a3b532a4786aefeecee3e9ee432844a0 Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.524203 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-97cb-account-create-rn9b5"] Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.620764 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6g6x\" (UniqueName: \"kubernetes.io/projected/73076b06-20be-4053-9aeb-08c4e6db07a7-kube-api-access-d6g6x\") pod \"glance-68a3-account-create-tmjzb\" (UID: \"73076b06-20be-4053-9aeb-08c4e6db07a7\") " pod="openstack/glance-68a3-account-create-tmjzb" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.722244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6g6x\" (UniqueName: \"kubernetes.io/projected/73076b06-20be-4053-9aeb-08c4e6db07a7-kube-api-access-d6g6x\") pod \"glance-68a3-account-create-tmjzb\" (UID: \"73076b06-20be-4053-9aeb-08c4e6db07a7\") " pod="openstack/glance-68a3-account-create-tmjzb" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.743104 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6g6x\" (UniqueName: \"kubernetes.io/projected/73076b06-20be-4053-9aeb-08c4e6db07a7-kube-api-access-d6g6x\") pod \"glance-68a3-account-create-tmjzb\" (UID: \"73076b06-20be-4053-9aeb-08c4e6db07a7\") " pod="openstack/glance-68a3-account-create-tmjzb" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.820734 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-68a3-account-create-tmjzb" Oct 13 06:45:35 crc kubenswrapper[4833]: I1013 06:45:35.875607 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a43-account-create-tzctv"] Oct 13 06:45:35 crc kubenswrapper[4833]: W1013 06:45:35.876216 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d00616b_9c95_4ae5_aabc_60e2fb039035.slice/crio-34c71d1a8508d8f283f27a5dff07b1d832fa300086c525442b5c9665351ef7b8 WatchSource:0}: Error finding container 34c71d1a8508d8f283f27a5dff07b1d832fa300086c525442b5c9665351ef7b8: Status 404 returned error can't find the container with id 34c71d1a8508d8f283f27a5dff07b1d832fa300086c525442b5c9665351ef7b8 Oct 13 06:45:36 crc kubenswrapper[4833]: I1013 06:45:36.236176 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-68a3-account-create-tmjzb"] Oct 13 06:45:36 crc kubenswrapper[4833]: I1013 06:45:36.333461 4833 generic.go:334] "Generic (PLEG): container finished" podID="03bb2849-0073-48ac-b568-609f917fe111" containerID="afce58125c1568e6bf304d5afa6b0e91c5a9d7ebe531902badfc18775732770c" exitCode=0 Oct 13 06:45:36 crc kubenswrapper[4833]: I1013 06:45:36.333516 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97cb-account-create-rn9b5" event={"ID":"03bb2849-0073-48ac-b568-609f917fe111","Type":"ContainerDied","Data":"afce58125c1568e6bf304d5afa6b0e91c5a9d7ebe531902badfc18775732770c"} Oct 13 06:45:36 crc kubenswrapper[4833]: I1013 06:45:36.333555 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97cb-account-create-rn9b5" event={"ID":"03bb2849-0073-48ac-b568-609f917fe111","Type":"ContainerStarted","Data":"b237b011f47bf8953c6887619bf70877a3b532a4786aefeecee3e9ee432844a0"} Oct 13 06:45:36 crc kubenswrapper[4833]: I1013 06:45:36.335740 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-68a3-account-create-tmjzb" event={"ID":"73076b06-20be-4053-9aeb-08c4e6db07a7","Type":"ContainerStarted","Data":"58c0bef39ded56668f36384239632d7a3fcce98b714d54663d80287f4be5ed49"} Oct 13 06:45:36 crc kubenswrapper[4833]: I1013 06:45:36.344276 4833 generic.go:334] "Generic (PLEG): container finished" podID="6d00616b-9c95-4ae5-aabc-60e2fb039035" containerID="40d67ace0956fe6006bf47bbf62dbdb5ebc2c7ef856bdb62b1c074f6c99b2748" exitCode=0 Oct 13 06:45:36 crc kubenswrapper[4833]: I1013 06:45:36.344320 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a43-account-create-tzctv" event={"ID":"6d00616b-9c95-4ae5-aabc-60e2fb039035","Type":"ContainerDied","Data":"40d67ace0956fe6006bf47bbf62dbdb5ebc2c7ef856bdb62b1c074f6c99b2748"} Oct 13 06:45:36 crc kubenswrapper[4833]: I1013 06:45:36.344343 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a43-account-create-tzctv" event={"ID":"6d00616b-9c95-4ae5-aabc-60e2fb039035","Type":"ContainerStarted","Data":"34c71d1a8508d8f283f27a5dff07b1d832fa300086c525442b5c9665351ef7b8"} Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.354045 4833 generic.go:334] "Generic (PLEG): container finished" podID="73076b06-20be-4053-9aeb-08c4e6db07a7" containerID="7d5393630528a3731b07a1e3a9290637e0b186b2594cc6d44c147e3d89bf11bd" exitCode=0 Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.354117 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-68a3-account-create-tmjzb" 
event={"ID":"73076b06-20be-4053-9aeb-08c4e6db07a7","Type":"ContainerDied","Data":"7d5393630528a3731b07a1e3a9290637e0b186b2594cc6d44c147e3d89bf11bd"} Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.358416 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"39a80ccb5dcfc3109b31f5ea15bdac0c69f4fb148fff6b2e14183efb30f32315"} Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.358529 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"b40d94a3b28168dc3adfbd67bb111dd625c1b3a8e28dfcf65f21de1d71ac05ef"} Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.358588 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"ddc798bf52735ed655b9f2029dcd6fac626a69a57beb0d6ecfacaf0af9255c10"} Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.358605 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"7126480ee2e234f256253f3be3f11958f282b8685399c352e9fe1fed288e1a27"} Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.705866 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97cb-account-create-rn9b5" Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.711891 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a43-account-create-tzctv" Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.768113 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts786\" (UniqueName: \"kubernetes.io/projected/6d00616b-9c95-4ae5-aabc-60e2fb039035-kube-api-access-ts786\") pod \"6d00616b-9c95-4ae5-aabc-60e2fb039035\" (UID: \"6d00616b-9c95-4ae5-aabc-60e2fb039035\") " Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.768193 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfh64\" (UniqueName: \"kubernetes.io/projected/03bb2849-0073-48ac-b568-609f917fe111-kube-api-access-tfh64\") pod \"03bb2849-0073-48ac-b568-609f917fe111\" (UID: \"03bb2849-0073-48ac-b568-609f917fe111\") " Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.772994 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d00616b-9c95-4ae5-aabc-60e2fb039035-kube-api-access-ts786" (OuterVolumeSpecName: "kube-api-access-ts786") pod "6d00616b-9c95-4ae5-aabc-60e2fb039035" (UID: "6d00616b-9c95-4ae5-aabc-60e2fb039035"). InnerVolumeSpecName "kube-api-access-ts786". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.773483 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03bb2849-0073-48ac-b568-609f917fe111-kube-api-access-tfh64" (OuterVolumeSpecName: "kube-api-access-tfh64") pod "03bb2849-0073-48ac-b568-609f917fe111" (UID: "03bb2849-0073-48ac-b568-609f917fe111"). InnerVolumeSpecName "kube-api-access-tfh64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.869664 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts786\" (UniqueName: \"kubernetes.io/projected/6d00616b-9c95-4ae5-aabc-60e2fb039035-kube-api-access-ts786\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:37 crc kubenswrapper[4833]: I1013 06:45:37.869699 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfh64\" (UniqueName: \"kubernetes.io/projected/03bb2849-0073-48ac-b568-609f917fe111-kube-api-access-tfh64\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.378283 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97cb-account-create-rn9b5" event={"ID":"03bb2849-0073-48ac-b568-609f917fe111","Type":"ContainerDied","Data":"b237b011f47bf8953c6887619bf70877a3b532a4786aefeecee3e9ee432844a0"} Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.378659 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b237b011f47bf8953c6887619bf70877a3b532a4786aefeecee3e9ee432844a0" Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.378444 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97cb-account-create-rn9b5" Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.382523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"dda623bd500bc7d4d2d7d9bda0087208d82cc295d3ca8170fefd53b38c5cb99b"} Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.382735 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"b4fe6dd76ddecca8a3c9f5a3f305a67a70a4c5075c8827646cbfd73ae58679f8"} Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.382861 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"a338bdcb17781b39a4745895b5274ba984f3740577bcb756eb359867e4c8349d"} Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.385417 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a43-account-create-tzctv" Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.385443 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a43-account-create-tzctv" event={"ID":"6d00616b-9c95-4ae5-aabc-60e2fb039035","Type":"ContainerDied","Data":"34c71d1a8508d8f283f27a5dff07b1d832fa300086c525442b5c9665351ef7b8"} Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.385799 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34c71d1a8508d8f283f27a5dff07b1d832fa300086c525442b5c9665351ef7b8" Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.666231 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-68a3-account-create-tmjzb" Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.681375 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6g6x\" (UniqueName: \"kubernetes.io/projected/73076b06-20be-4053-9aeb-08c4e6db07a7-kube-api-access-d6g6x\") pod \"73076b06-20be-4053-9aeb-08c4e6db07a7\" (UID: \"73076b06-20be-4053-9aeb-08c4e6db07a7\") " Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.688509 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73076b06-20be-4053-9aeb-08c4e6db07a7-kube-api-access-d6g6x" (OuterVolumeSpecName: "kube-api-access-d6g6x") pod "73076b06-20be-4053-9aeb-08c4e6db07a7" (UID: "73076b06-20be-4053-9aeb-08c4e6db07a7"). InnerVolumeSpecName "kube-api-access-d6g6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:38 crc kubenswrapper[4833]: I1013 06:45:38.783168 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6g6x\" (UniqueName: \"kubernetes.io/projected/73076b06-20be-4053-9aeb-08c4e6db07a7-kube-api-access-d6g6x\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:39 crc kubenswrapper[4833]: I1013 06:45:39.395758 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-68a3-account-create-tmjzb" Oct 13 06:45:39 crc kubenswrapper[4833]: I1013 06:45:39.395942 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-68a3-account-create-tmjzb" event={"ID":"73076b06-20be-4053-9aeb-08c4e6db07a7","Type":"ContainerDied","Data":"58c0bef39ded56668f36384239632d7a3fcce98b714d54663d80287f4be5ed49"} Oct 13 06:45:39 crc kubenswrapper[4833]: I1013 06:45:39.396015 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c0bef39ded56668f36384239632d7a3fcce98b714d54663d80287f4be5ed49" Oct 13 06:45:39 crc kubenswrapper[4833]: I1013 06:45:39.400629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"7fbc873a90a0e18d29a4c28fb0bffb723bba4761bbd24dad68303e83c89729b5"} Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.515930 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-z7kn8"] Oct 13 06:45:40 crc kubenswrapper[4833]: E1013 06:45:40.516615 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73076b06-20be-4053-9aeb-08c4e6db07a7" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.516631 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="73076b06-20be-4053-9aeb-08c4e6db07a7" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: E1013 06:45:40.516642 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03bb2849-0073-48ac-b568-609f917fe111" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.516649 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="03bb2849-0073-48ac-b568-609f917fe111" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: E1013 06:45:40.516673 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d00616b-9c95-4ae5-aabc-60e2fb039035" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.516681 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6d00616b-9c95-4ae5-aabc-60e2fb039035" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.516883 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="73076b06-20be-4053-9aeb-08c4e6db07a7" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.516900 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d00616b-9c95-4ae5-aabc-60e2fb039035" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.516920 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="03bb2849-0073-48ac-b568-609f917fe111" containerName="mariadb-account-create" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.517579 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.519639 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-745xz" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.519806 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.533228 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z7kn8"] Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.610799 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fl8\" (UniqueName: \"kubernetes.io/projected/8b33d85d-c95b-4e57-a0d3-be407351e33b-kube-api-access-59fl8\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.611043 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-combined-ca-bundle\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.611078 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-db-sync-config-data\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.611165 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-config-data\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.712788 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-combined-ca-bundle\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.712835 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-db-sync-config-data\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.712912 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-config-data\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.712990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fl8\" (UniqueName: \"kubernetes.io/projected/8b33d85d-c95b-4e57-a0d3-be407351e33b-kube-api-access-59fl8\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.718997 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-combined-ca-bundle\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.718991 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-config-data\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.723161 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-db-sync-config-data\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.742960 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fl8\" (UniqueName: \"kubernetes.io/projected/8b33d85d-c95b-4e57-a0d3-be407351e33b-kube-api-access-59fl8\") pod \"glance-db-sync-z7kn8\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:40 crc kubenswrapper[4833]: I1013 06:45:40.885287 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z7kn8" Oct 13 06:45:41 crc kubenswrapper[4833]: I1013 06:45:41.240093 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rtrth" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" probeResult="failure" output=< Oct 13 06:45:41 crc kubenswrapper[4833]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 13 06:45:41 crc kubenswrapper[4833]: > Oct 13 06:45:41 crc kubenswrapper[4833]: I1013 06:45:41.434723 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"9dad12e9c90578194f390432ae46d99079a4a5d4c95d825ba6dcc15e26e20fb2"} Oct 13 06:45:41 crc kubenswrapper[4833]: I1013 06:45:41.435029 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"02e170a5ebde87992af1b9ec82acf052249debf50eb102dbdc067004eac83dd6"} Oct 13 06:45:41 crc kubenswrapper[4833]: I1013 06:45:41.435045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"ef4bcd2d312a9e41b4e42cf22758d715ea58715ab0b3bcd2ec00f09ab616489b"} Oct 13 06:45:41 crc kubenswrapper[4833]: I1013 06:45:41.435058 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"5847c7fbaaa19a0f3623af3ea4be590fad1d82ea8d09cd6086994de5af8c21c0"} Oct 13 06:45:41 crc kubenswrapper[4833]: I1013 06:45:41.438809 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z7kn8"] Oct 13 06:45:41 crc kubenswrapper[4833]: W1013 06:45:41.453282 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b33d85d_c95b_4e57_a0d3_be407351e33b.slice/crio-9df7dd4759bccffa843cc06c319484599ecc8cf943b80672e255b4343282cccc WatchSource:0}: Error finding container 9df7dd4759bccffa843cc06c319484599ecc8cf943b80672e255b4343282cccc: Status 404 returned error can't find the container with id 9df7dd4759bccffa843cc06c319484599ecc8cf943b80672e255b4343282cccc Oct 13 06:45:42 crc kubenswrapper[4833]: I1013 06:45:42.455730 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"9b5d782d1b0574c39149c8bb487ccb192e4ad78574ba00d0053886812eecf629"} Oct 13 06:45:42 crc kubenswrapper[4833]: I1013 06:45:42.456151 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"b4b5158af1d09b9e60b53b67061ee2a7c79d89b8a882cf00a94e754f31eeb82c"} Oct 13 06:45:42 crc kubenswrapper[4833]: I1013 06:45:42.457577 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z7kn8" event={"ID":"8b33d85d-c95b-4e57-a0d3-be407351e33b","Type":"ContainerStarted","Data":"9df7dd4759bccffa843cc06c319484599ecc8cf943b80672e255b4343282cccc"} Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.471389 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerStarted","Data":"b8c0fd99cc7bf147089ee3034a7d63738ca80123381a9e4fcfb1fb0f59148960"} Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.540720 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.785274167 podStartE2EDuration="27.540700575s" podCreationTimestamp="2025-10-13 06:45:16 +0000 UTC" firstStartedPulling="2025-10-13 06:45:34.883599973 +0000 UTC m=+1024.984022899" lastFinishedPulling="2025-10-13 06:45:40.639026391 +0000 UTC m=+1030.739449307" observedRunningTime="2025-10-13 06:45:43.522993491 +0000 UTC m=+1033.623416407" watchObservedRunningTime="2025-10-13 06:45:43.540700575 +0000 UTC m=+1033.641123491" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.778620 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-cblgb"] Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.780773 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.782397 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.783464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-cblgb"] Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.882366 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-config\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.882442 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.882505 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.882566 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-svc\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.882590 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctjk\" (UniqueName: \"kubernetes.io/projected/05705398-ade7-423c-8752-10e9255703f6-kube-api-access-7ctjk\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.882642 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.984138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-svc\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.984204 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctjk\" (UniqueName: \"kubernetes.io/projected/05705398-ade7-423c-8752-10e9255703f6-kube-api-access-7ctjk\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.984295 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.984363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-config\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.984411 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.984475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.985399 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.985959 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-svc\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.986942 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.988090 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-config\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:43 crc kubenswrapper[4833]: I1013 06:45:43.988804 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:44 crc kubenswrapper[4833]: I1013 06:45:44.028844 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctjk\" (UniqueName: \"kubernetes.io/projected/05705398-ade7-423c-8752-10e9255703f6-kube-api-access-7ctjk\") pod \"dnsmasq-dns-564965cbfc-cblgb\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") " pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:44 crc kubenswrapper[4833]: I1013 06:45:44.105483 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:44 crc kubenswrapper[4833]: I1013 06:45:44.557300 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-cblgb"] Oct 13 06:45:45 crc kubenswrapper[4833]: I1013 06:45:45.486861 4833 generic.go:334] "Generic (PLEG): container finished" podID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" containerID="81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7" exitCode=0 Oct 13 06:45:45 crc kubenswrapper[4833]: I1013 06:45:45.487338 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0a6ab499-ed60-45e7-b510-5a43422aa7f5","Type":"ContainerDied","Data":"81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7"} Oct 13 06:45:45 crc kubenswrapper[4833]: I1013 06:45:45.490308 4833 generic.go:334] "Generic (PLEG): container finished" podID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerID="8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a" exitCode=0 Oct 13 06:45:45 crc kubenswrapper[4833]: I1013 06:45:45.490410 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"827f736f-2193-4ebd-ab7f-99fb22945d1e","Type":"ContainerDied","Data":"8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a"} Oct 13 06:45:45 crc kubenswrapper[4833]: I1013 06:45:45.493531 4833 generic.go:334] "Generic (PLEG): container finished" podID="05705398-ade7-423c-8752-10e9255703f6" containerID="6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8" exitCode=0 Oct 13 06:45:45 crc kubenswrapper[4833]: I1013 06:45:45.493584 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" event={"ID":"05705398-ade7-423c-8752-10e9255703f6","Type":"ContainerDied","Data":"6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8"} Oct 13 06:45:45 crc kubenswrapper[4833]: I1013 06:45:45.493611 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-564965cbfc-cblgb" event={"ID":"05705398-ade7-423c-8752-10e9255703f6","Type":"ContainerStarted","Data":"907e63b89baae5ed23dd89c29cb0073b40b33d1f670272a8c48eb1a6dd4c22d9"} Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.247457 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rtrth" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" probeResult="failure" output=< Oct 13 06:45:46 crc kubenswrapper[4833]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 13 06:45:46 crc kubenswrapper[4833]: > Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.271102 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.273165 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.504183 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" event={"ID":"05705398-ade7-423c-8752-10e9255703f6","Type":"ContainerStarted","Data":"a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa"} Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.505236 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.519952 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0a6ab499-ed60-45e7-b510-5a43422aa7f5","Type":"ContainerStarted","Data":"24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27"} Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.520345 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.523375 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rtrth-config-8mq2k"] Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.524611 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.528054 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.532850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"827f736f-2193-4ebd-ab7f-99fb22945d1e","Type":"ContainerStarted","Data":"0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b"} Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.533302 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.540100 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtrth-config-8mq2k"] Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.567699 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" podStartSLOduration=3.567674045 podStartE2EDuration="3.567674045s" podCreationTimestamp="2025-10-13 06:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:45:46.544514483 +0000 UTC m=+1036.644937399" watchObservedRunningTime="2025-10-13 06:45:46.567674045 +0000 UTC m=+1036.668096971" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.582561 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371969.272247 podStartE2EDuration="1m7.582528546s" podCreationTimestamp="2025-10-13 06:44:39 +0000 UTC" firstStartedPulling="2025-10-13 06:44:41.362184404 +0000 UTC m=+971.462607320" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:45:46.582139245 +0000 UTC m=+1036.682562161" watchObservedRunningTime="2025-10-13 06:45:46.582528546 +0000 UTC m=+1036.682951462" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.623971 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcdvz\" (UniqueName: \"kubernetes.io/projected/f3f7062b-a55e-45be-afba-0a6498795dc7-kube-api-access-jcdvz\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.624057 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.624227 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-log-ovn\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.624295 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-scripts\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.624473 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-additional-scripts\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.625185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run-ovn\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.727363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.727442 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-log-ovn\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.727483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-scripts\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.727530 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-additional-scripts\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.727567 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run-ovn\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.727593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcdvz\" (UniqueName: \"kubernetes.io/projected/f3f7062b-a55e-45be-afba-0a6498795dc7-kube-api-access-jcdvz\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.727768 4833 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.727770 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-log-ovn\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.728643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-additional-scripts\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.729880 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-scripts\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.728152 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run-ovn\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.763561 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcdvz\" (UniqueName: \"kubernetes.io/projected/f3f7062b-a55e-45be-afba-0a6498795dc7-kube-api-access-jcdvz\") pod \"ovn-controller-rtrth-config-8mq2k\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:46 crc kubenswrapper[4833]: I1013 06:45:46.848910 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:47 crc kubenswrapper[4833]: I1013 06:45:47.329640 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.649838112 podStartE2EDuration="1m7.329615752s" podCreationTimestamp="2025-10-13 06:44:40 +0000 UTC" firstStartedPulling="2025-10-13 06:44:42.416922646 +0000 UTC m=+972.517345562" lastFinishedPulling="2025-10-13 06:45:11.096700286 +0000 UTC m=+1001.197123202" observedRunningTime="2025-10-13 06:45:46.606915814 +0000 UTC m=+1036.707338730" watchObservedRunningTime="2025-10-13 06:45:47.329615752 +0000 UTC m=+1037.430038678" Oct 13 06:45:47 crc kubenswrapper[4833]: I1013 06:45:47.331511 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtrth-config-8mq2k"] Oct 13 06:45:47 crc kubenswrapper[4833]: I1013 06:45:47.546864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth-config-8mq2k" event={"ID":"f3f7062b-a55e-45be-afba-0a6498795dc7","Type":"ContainerStarted","Data":"769256175ad7da49e8ed1fb755d10ee42b410c9672efc91f42acfbca2b8b6d79"} Oct 13 06:45:48 crc kubenswrapper[4833]: I1013 06:45:48.555458 4833 generic.go:334] "Generic (PLEG): container finished" podID="f3f7062b-a55e-45be-afba-0a6498795dc7" containerID="e6de5ecc3f8c57e4a901a25b2899bef4da8d07a78b9d5ea4e9c74c5629425736" exitCode=0 Oct 13 06:45:48 crc kubenswrapper[4833]: I1013 06:45:48.555564 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth-config-8mq2k" event={"ID":"f3f7062b-a55e-45be-afba-0a6498795dc7","Type":"ContainerDied","Data":"e6de5ecc3f8c57e4a901a25b2899bef4da8d07a78b9d5ea4e9c74c5629425736"} Oct 13 06:45:51 crc kubenswrapper[4833]: I1013 06:45:51.245866 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rtrth" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.107418 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.176613 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-w8zvt"] Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.177047 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" podUID="cbaa5e3f-00a5-4af4-a775-968ad570939c" containerName="dnsmasq-dns" containerID="cri-o://aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7" gracePeriod=10 Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.188092 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.259562 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcdvz\" (UniqueName: \"kubernetes.io/projected/f3f7062b-a55e-45be-afba-0a6498795dc7-kube-api-access-jcdvz\") pod \"f3f7062b-a55e-45be-afba-0a6498795dc7\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.259754 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run\") pod \"f3f7062b-a55e-45be-afba-0a6498795dc7\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.259837 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-additional-scripts\") pod \"f3f7062b-a55e-45be-afba-0a6498795dc7\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.259863 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-scripts\") pod \"f3f7062b-a55e-45be-afba-0a6498795dc7\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.259890 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-log-ovn\") pod \"f3f7062b-a55e-45be-afba-0a6498795dc7\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.259942 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run-ovn\") pod \"f3f7062b-a55e-45be-afba-0a6498795dc7\" (UID: \"f3f7062b-a55e-45be-afba-0a6498795dc7\") " Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.260855 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f3f7062b-a55e-45be-afba-0a6498795dc7" (UID: "f3f7062b-a55e-45be-afba-0a6498795dc7"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.261094 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f3f7062b-a55e-45be-afba-0a6498795dc7" (UID: "f3f7062b-a55e-45be-afba-0a6498795dc7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.261130 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f3f7062b-a55e-45be-afba-0a6498795dc7" (UID: "f3f7062b-a55e-45be-afba-0a6498795dc7"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.262326 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run" (OuterVolumeSpecName: "var-run") pod "f3f7062b-a55e-45be-afba-0a6498795dc7" (UID: "f3f7062b-a55e-45be-afba-0a6498795dc7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.265405 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-scripts" (OuterVolumeSpecName: "scripts") pod "f3f7062b-a55e-45be-afba-0a6498795dc7" (UID: "f3f7062b-a55e-45be-afba-0a6498795dc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.265712 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f7062b-a55e-45be-afba-0a6498795dc7-kube-api-access-jcdvz" (OuterVolumeSpecName: "kube-api-access-jcdvz") pod "f3f7062b-a55e-45be-afba-0a6498795dc7" (UID: "f3f7062b-a55e-45be-afba-0a6498795dc7"). InnerVolumeSpecName "kube-api-access-jcdvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.362400 4833 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.362431 4833 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.362443 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3f7062b-a55e-45be-afba-0a6498795dc7-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.362451 4833 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.362460 4833 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3f7062b-a55e-45be-afba-0a6498795dc7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:54 crc kubenswrapper[4833]: I1013 06:45:54.362469 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcdvz\" (UniqueName: \"kubernetes.io/projected/f3f7062b-a55e-45be-afba-0a6498795dc7-kube-api-access-jcdvz\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.582982 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.617515 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth-config-8mq2k" event={"ID":"f3f7062b-a55e-45be-afba-0a6498795dc7","Type":"ContainerDied","Data":"769256175ad7da49e8ed1fb755d10ee42b410c9672efc91f42acfbca2b8b6d79"} Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.617587 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="769256175ad7da49e8ed1fb755d10ee42b410c9672efc91f42acfbca2b8b6d79" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.617536 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtrth-config-8mq2k" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.620253 4833 generic.go:334] "Generic (PLEG): container finished" podID="cbaa5e3f-00a5-4af4-a775-968ad570939c" containerID="aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7" exitCode=0 Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.620295 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" event={"ID":"cbaa5e3f-00a5-4af4-a775-968ad570939c","Type":"ContainerDied","Data":"aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7"} Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.620325 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" event={"ID":"cbaa5e3f-00a5-4af4-a775-968ad570939c","Type":"ContainerDied","Data":"44a931d9e0711539c02a75e259c964a99056690ae4bf73ac870fb1930131deb3"} Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.620345 4833 scope.go:117] "RemoveContainer" containerID="aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.620525 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-w8zvt" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.642630 4833 scope.go:117] "RemoveContainer" containerID="6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.673006 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-sb\") pod \"cbaa5e3f-00a5-4af4-a775-968ad570939c\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.673092 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-nb\") pod \"cbaa5e3f-00a5-4af4-a775-968ad570939c\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.673134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-dns-svc\") pod \"cbaa5e3f-00a5-4af4-a775-968ad570939c\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.673196 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqv4\" (UniqueName: \"kubernetes.io/projected/cbaa5e3f-00a5-4af4-a775-968ad570939c-kube-api-access-jdqv4\") pod \"cbaa5e3f-00a5-4af4-a775-968ad570939c\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.673251 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-config\") pod \"cbaa5e3f-00a5-4af4-a775-968ad570939c\" (UID: \"cbaa5e3f-00a5-4af4-a775-968ad570939c\") " Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.679928 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbaa5e3f-00a5-4af4-a775-968ad570939c-kube-api-access-jdqv4" (OuterVolumeSpecName: "kube-api-access-jdqv4") pod "cbaa5e3f-00a5-4af4-a775-968ad570939c" (UID: "cbaa5e3f-00a5-4af4-a775-968ad570939c"). InnerVolumeSpecName "kube-api-access-jdqv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.684485 4833 scope.go:117] "RemoveContainer" containerID="aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7" Oct 13 06:45:55 crc kubenswrapper[4833]: E1013 06:45:54.684900 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7\": container with ID starting with aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7 not found: ID does not exist" containerID="aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.684942 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7"} err="failed to get container status \"aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7\": rpc error: code = NotFound desc = could not find container \"aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7\": container with ID starting with aa7fd135c331e5f89d3a061fdc4ae2d1ca62ab04de82a38d834713829a0610f7 not found: ID does not exist" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.684980 4833 scope.go:117] "RemoveContainer" containerID="6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3" Oct 13 06:45:55 crc kubenswrapper[4833]: E1013 06:45:54.688893 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3\": container with ID starting with 6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3 not found: ID does not exist" containerID="6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.688933 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3"} err="failed to get container status \"6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3\": rpc error: code = NotFound desc = could not find container \"6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3\": container with ID starting with 6a6555e82084ec62bd3a08447b7e7c07186fb60d4b3fd3f75f6ec190f1c201c3 not found: ID does not exist" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.723663 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cbaa5e3f-00a5-4af4-a775-968ad570939c" (UID: "cbaa5e3f-00a5-4af4-a775-968ad570939c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.730932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-config" (OuterVolumeSpecName: "config") pod "cbaa5e3f-00a5-4af4-a775-968ad570939c" (UID: "cbaa5e3f-00a5-4af4-a775-968ad570939c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.733678 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cbaa5e3f-00a5-4af4-a775-968ad570939c" (UID: "cbaa5e3f-00a5-4af4-a775-968ad570939c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.746107 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cbaa5e3f-00a5-4af4-a775-968ad570939c" (UID: "cbaa5e3f-00a5-4af4-a775-968ad570939c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.775321 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.775352 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.775365 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqv4\" (UniqueName: \"kubernetes.io/projected/cbaa5e3f-00a5-4af4-a775-968ad570939c-kube-api-access-jdqv4\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.775583 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.775591 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa5e3f-00a5-4af4-a775-968ad570939c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.986626 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-w8zvt"] Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:54.992200 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-w8zvt"] Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.295207 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rtrth-config-8mq2k"] Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.304585 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rtrth-config-8mq2k"] Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.395103 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rtrth-config-slklq"] Oct 13 06:45:55 crc kubenswrapper[4833]: E1013 06:45:55.395417 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f7062b-a55e-45be-afba-0a6498795dc7" containerName="ovn-config" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.395456 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f7062b-a55e-45be-afba-0a6498795dc7" containerName="ovn-config" Oct 13 06:45:55 crc kubenswrapper[4833]: E1013 06:45:55.395489 4833 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaa5e3f-00a5-4af4-a775-968ad570939c" containerName="dnsmasq-dns" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.395496 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaa5e3f-00a5-4af4-a775-968ad570939c" containerName="dnsmasq-dns" Oct 13 06:45:55 crc kubenswrapper[4833]: E1013 06:45:55.395515 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaa5e3f-00a5-4af4-a775-968ad570939c" containerName="init" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.395522 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaa5e3f-00a5-4af4-a775-968ad570939c" containerName="init" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.395688 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f7062b-a55e-45be-afba-0a6498795dc7" containerName="ovn-config" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.395707 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbaa5e3f-00a5-4af4-a775-968ad570939c" containerName="dnsmasq-dns" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.396194 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.402996 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.408715 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtrth-config-slklq"] Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.488316 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-scripts\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.488388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-log-ovn\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.488434 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.488466 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hgl\" (UniqueName: \"kubernetes.io/projected/ebd582e8-0456-422d-a744-c607bf04431c-kube-api-access-k6hgl\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.488488 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-additional-scripts\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.488524 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run-ovn\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.589722 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.589785 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hgl\" (UniqueName: \"kubernetes.io/projected/ebd582e8-0456-422d-a744-c607bf04431c-kube-api-access-k6hgl\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.589812 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-additional-scripts\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.589845 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run-ovn\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.589935 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-scripts\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.589983 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-log-ovn\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.590142 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.590197 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-log-ovn\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.590231 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run-ovn\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.591174 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-additional-scripts\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.594159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-scripts\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.608117 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6hgl\" (UniqueName: \"kubernetes.io/projected/ebd582e8-0456-422d-a744-c607bf04431c-kube-api-access-k6hgl\") pod \"ovn-controller-rtrth-config-slklq\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.632078 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z7kn8" event={"ID":"8b33d85d-c95b-4e57-a0d3-be407351e33b","Type":"ContainerStarted","Data":"9424384411f7c288d34b772f293a05528e9898aec1796a567f88c98f287d2166"} Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.653698 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-z7kn8" podStartSLOduration=3.111297113 podStartE2EDuration="15.653677288s" podCreationTimestamp="2025-10-13 06:45:40 +0000 UTC" firstStartedPulling="2025-10-13 06:45:41.45558576 +0000 UTC m=+1031.556008676" lastFinishedPulling="2025-10-13 06:45:53.997965935 +0000 UTC m=+1044.098388851" observedRunningTime="2025-10-13 06:45:55.648501438 +0000 UTC m=+1045.748924364" watchObservedRunningTime="2025-10-13 06:45:55.653677288 +0000 UTC m=+1045.754100214" Oct 13 06:45:55 crc kubenswrapper[4833]: I1013 06:45:55.733697 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:56 crc kubenswrapper[4833]: I1013 06:45:56.216416 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtrth-config-slklq"] Oct 13 06:45:56 crc kubenswrapper[4833]: W1013 06:45:56.230357 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebd582e8_0456_422d_a744_c607bf04431c.slice/crio-818c4c67ac86db3f15d9a92cdb6d29d9f7698104c6a7340f0b4f597b3f919b66 WatchSource:0}: Error finding container 818c4c67ac86db3f15d9a92cdb6d29d9f7698104c6a7340f0b4f597b3f919b66: Status 404 returned error can't find the container with id 818c4c67ac86db3f15d9a92cdb6d29d9f7698104c6a7340f0b4f597b3f919b66 Oct 13 06:45:56 crc kubenswrapper[4833]: I1013 06:45:56.642456 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbaa5e3f-00a5-4af4-a775-968ad570939c" path="/var/lib/kubelet/pods/cbaa5e3f-00a5-4af4-a775-968ad570939c/volumes" Oct 13 06:45:56 crc kubenswrapper[4833]: I1013 06:45:56.645775 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f7062b-a55e-45be-afba-0a6498795dc7" path="/var/lib/kubelet/pods/f3f7062b-a55e-45be-afba-0a6498795dc7/volumes" Oct 13 06:45:56 crc kubenswrapper[4833]: I1013 06:45:56.646550 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth-config-slklq" event={"ID":"ebd582e8-0456-422d-a744-c607bf04431c","Type":"ContainerStarted","Data":"2741b44dc5fc1f2672f5979bc81587b08dc2a228b75a1ce72591ec32cbe809c9"} Oct 13 06:45:56 crc kubenswrapper[4833]: I1013 06:45:56.646587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth-config-slklq" event={"ID":"ebd582e8-0456-422d-a744-c607bf04431c","Type":"ContainerStarted","Data":"818c4c67ac86db3f15d9a92cdb6d29d9f7698104c6a7340f0b4f597b3f919b66"} Oct 13 06:45:56 crc kubenswrapper[4833]: I1013 06:45:56.675918 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rtrth-config-slklq" podStartSLOduration=1.675899013 podStartE2EDuration="1.675899013s" podCreationTimestamp="2025-10-13 06:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:45:56.670064883 +0000 UTC m=+1046.770487799" watchObservedRunningTime="2025-10-13 06:45:56.675899013 +0000 UTC m=+1046.776321939" Oct 13 06:45:57 crc kubenswrapper[4833]: I1013 06:45:57.659427 4833 generic.go:334] "Generic (PLEG): container finished" podID="ebd582e8-0456-422d-a744-c607bf04431c" containerID="2741b44dc5fc1f2672f5979bc81587b08dc2a228b75a1ce72591ec32cbe809c9" exitCode=0 Oct 13 06:45:57 crc kubenswrapper[4833]: I1013 06:45:57.659533 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth-config-slklq" event={"ID":"ebd582e8-0456-422d-a744-c607bf04431c","Type":"ContainerDied","Data":"2741b44dc5fc1f2672f5979bc81587b08dc2a228b75a1ce72591ec32cbe809c9"} Oct 13 06:45:58 crc kubenswrapper[4833]: I1013 06:45:58.973356 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.151352 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-log-ovn\") pod \"ebd582e8-0456-422d-a744-c607bf04431c\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.151418 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6hgl\" (UniqueName: \"kubernetes.io/projected/ebd582e8-0456-422d-a744-c607bf04431c-kube-api-access-k6hgl\") pod \"ebd582e8-0456-422d-a744-c607bf04431c\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.151445 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run-ovn\") pod \"ebd582e8-0456-422d-a744-c607bf04431c\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.151491 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-scripts\") pod \"ebd582e8-0456-422d-a744-c607bf04431c\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.151683 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run\") pod \"ebd582e8-0456-422d-a744-c607bf04431c\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.151721 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-additional-scripts\") pod \"ebd582e8-0456-422d-a744-c607bf04431c\" (UID: \"ebd582e8-0456-422d-a744-c607bf04431c\") " Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.153045 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ebd582e8-0456-422d-a744-c607bf04431c" (UID: "ebd582e8-0456-422d-a744-c607bf04431c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.153089 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ebd582e8-0456-422d-a744-c607bf04431c" (UID: "ebd582e8-0456-422d-a744-c607bf04431c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.153708 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ebd582e8-0456-422d-a744-c607bf04431c" (UID: "ebd582e8-0456-422d-a744-c607bf04431c"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.153789 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run" (OuterVolumeSpecName: "var-run") pod "ebd582e8-0456-422d-a744-c607bf04431c" (UID: "ebd582e8-0456-422d-a744-c607bf04431c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.154293 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-scripts" (OuterVolumeSpecName: "scripts") pod "ebd582e8-0456-422d-a744-c607bf04431c" (UID: "ebd582e8-0456-422d-a744-c607bf04431c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.166449 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd582e8-0456-422d-a744-c607bf04431c-kube-api-access-k6hgl" (OuterVolumeSpecName: "kube-api-access-k6hgl") pod "ebd582e8-0456-422d-a744-c607bf04431c" (UID: "ebd582e8-0456-422d-a744-c607bf04431c"). InnerVolumeSpecName "kube-api-access-k6hgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.253379 4833 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.253407 4833 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.253417 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6hgl\" (UniqueName: \"kubernetes.io/projected/ebd582e8-0456-422d-a744-c607bf04431c-kube-api-access-k6hgl\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.253426 4833 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.253434 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd582e8-0456-422d-a744-c607bf04431c-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.253442 4833 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd582e8-0456-422d-a744-c607bf04431c-var-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.675157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth-config-slklq" event={"ID":"ebd582e8-0456-422d-a744-c607bf04431c","Type":"ContainerDied","Data":"818c4c67ac86db3f15d9a92cdb6d29d9f7698104c6a7340f0b4f597b3f919b66"} Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.675206 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818c4c67ac86db3f15d9a92cdb6d29d9f7698104c6a7340f0b4f597b3f919b66" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.675214 4833 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtrth-config-slklq" Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.739184 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rtrth-config-slklq"] Oct 13 06:45:59 crc kubenswrapper[4833]: I1013 06:45:59.745470 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rtrth-config-slklq"] Oct 13 06:46:00 crc kubenswrapper[4833]: I1013 06:46:00.637867 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd582e8-0456-422d-a744-c607bf04431c" path="/var/lib/kubelet/pods/ebd582e8-0456-422d-a744-c607bf04431c/volumes" Oct 13 06:46:00 crc kubenswrapper[4833]: I1013 06:46:00.686593 4833 generic.go:334] "Generic (PLEG): container finished" podID="8b33d85d-c95b-4e57-a0d3-be407351e33b" containerID="9424384411f7c288d34b772f293a05528e9898aec1796a567f88c98f287d2166" exitCode=0 Oct 13 06:46:00 crc kubenswrapper[4833]: I1013 06:46:00.686673 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z7kn8" event={"ID":"8b33d85d-c95b-4e57-a0d3-be407351e33b","Type":"ContainerDied","Data":"9424384411f7c288d34b772f293a05528e9898aec1796a567f88c98f287d2166"} Oct 13 06:46:00 crc kubenswrapper[4833]: I1013 06:46:00.889331 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 13 06:46:01 crc kubenswrapper[4833]: I1013 06:46:01.824963 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.117019 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z7kn8" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.202951 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-db-sync-config-data\") pod \"8b33d85d-c95b-4e57-a0d3-be407351e33b\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.202992 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-config-data\") pod \"8b33d85d-c95b-4e57-a0d3-be407351e33b\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.203116 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59fl8\" (UniqueName: \"kubernetes.io/projected/8b33d85d-c95b-4e57-a0d3-be407351e33b-kube-api-access-59fl8\") pod \"8b33d85d-c95b-4e57-a0d3-be407351e33b\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.203147 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-combined-ca-bundle\") pod \"8b33d85d-c95b-4e57-a0d3-be407351e33b\" (UID: \"8b33d85d-c95b-4e57-a0d3-be407351e33b\") " Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.209217 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8b33d85d-c95b-4e57-a0d3-be407351e33b" (UID: 
"8b33d85d-c95b-4e57-a0d3-be407351e33b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.227285 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b33d85d-c95b-4e57-a0d3-be407351e33b" (UID: "8b33d85d-c95b-4e57-a0d3-be407351e33b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.231935 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b33d85d-c95b-4e57-a0d3-be407351e33b-kube-api-access-59fl8" (OuterVolumeSpecName: "kube-api-access-59fl8") pod "8b33d85d-c95b-4e57-a0d3-be407351e33b" (UID: "8b33d85d-c95b-4e57-a0d3-be407351e33b"). InnerVolumeSpecName "kube-api-access-59fl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.246938 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-config-data" (OuterVolumeSpecName: "config-data") pod "8b33d85d-c95b-4e57-a0d3-be407351e33b" (UID: "8b33d85d-c95b-4e57-a0d3-be407351e33b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.304948 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59fl8\" (UniqueName: \"kubernetes.io/projected/8b33d85d-c95b-4e57-a0d3-be407351e33b-kube-api-access-59fl8\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.304990 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.305001 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.305009 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b33d85d-c95b-4e57-a0d3-be407351e33b-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.726504 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z7kn8" event={"ID":"8b33d85d-c95b-4e57-a0d3-be407351e33b","Type":"ContainerDied","Data":"9df7dd4759bccffa843cc06c319484599ecc8cf943b80672e255b4343282cccc"} Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.727303 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df7dd4759bccffa843cc06c319484599ecc8cf943b80672e255b4343282cccc" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.727408 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z7kn8" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.760869 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cwhzf"] Oct 13 06:46:02 crc kubenswrapper[4833]: E1013 06:46:02.761273 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b33d85d-c95b-4e57-a0d3-be407351e33b" containerName="glance-db-sync" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.761290 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b33d85d-c95b-4e57-a0d3-be407351e33b" containerName="glance-db-sync" Oct 13 06:46:02 crc kubenswrapper[4833]: E1013 06:46:02.761313 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd582e8-0456-422d-a744-c607bf04431c" containerName="ovn-config" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.761319 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd582e8-0456-422d-a744-c607bf04431c" containerName="ovn-config" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.761510 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b33d85d-c95b-4e57-a0d3-be407351e33b" containerName="glance-db-sync" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.761549 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd582e8-0456-422d-a744-c607bf04431c" containerName="ovn-config" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.762070 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwhzf" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.777274 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cwhzf"] Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.868630 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6csh9"] Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.869881 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-6csh9" Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.888722 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6csh9"] Oct 13 06:46:02 crc kubenswrapper[4833]: I1013 06:46:02.923265 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2fh\" (UniqueName: \"kubernetes.io/projected/803c0a56-1e0a-4c20-a1c7-32ecf709cda4-kube-api-access-8s2fh\") pod \"cinder-db-create-cwhzf\" (UID: \"803c0a56-1e0a-4c20-a1c7-32ecf709cda4\") " pod="openstack/cinder-db-create-cwhzf" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.025003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktv6\" (UniqueName: \"kubernetes.io/projected/991076c0-40c3-4bdb-9766-a2c71b011caf-kube-api-access-tktv6\") pod \"barbican-db-create-6csh9\" (UID: \"991076c0-40c3-4bdb-9766-a2c71b011caf\") " pod="openstack/barbican-db-create-6csh9" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.025296 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2fh\" (UniqueName: \"kubernetes.io/projected/803c0a56-1e0a-4c20-a1c7-32ecf709cda4-kube-api-access-8s2fh\") pod \"cinder-db-create-cwhzf\" (UID: \"803c0a56-1e0a-4c20-a1c7-32ecf709cda4\") " pod="openstack/cinder-db-create-cwhzf" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.071761 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2s986"] Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.072891 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2s986" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.080753 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2s986"] Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.083276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2fh\" (UniqueName: \"kubernetes.io/projected/803c0a56-1e0a-4c20-a1c7-32ecf709cda4-kube-api-access-8s2fh\") pod \"cinder-db-create-cwhzf\" (UID: \"803c0a56-1e0a-4c20-a1c7-32ecf709cda4\") " pod="openstack/cinder-db-create-cwhzf" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.132890 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwhzf" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.135141 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktv6\" (UniqueName: \"kubernetes.io/projected/991076c0-40c3-4bdb-9766-a2c71b011caf-kube-api-access-tktv6\") pod \"barbican-db-create-6csh9\" (UID: \"991076c0-40c3-4bdb-9766-a2c71b011caf\") " pod="openstack/barbican-db-create-6csh9" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.182416 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktv6\" (UniqueName: \"kubernetes.io/projected/991076c0-40c3-4bdb-9766-a2c71b011caf-kube-api-access-tktv6\") pod \"barbican-db-create-6csh9\" (UID: \"991076c0-40c3-4bdb-9766-a2c71b011caf\") " pod="openstack/barbican-db-create-6csh9" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.210260 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-6csh9" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.211752 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pg5rw"] Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.212801 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.217773 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-62lpv" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.217861 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.217906 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.218032 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.236342 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmlss\" (UniqueName: \"kubernetes.io/projected/705bcc31-a619-447e-b29a-e98c322e5617-kube-api-access-kmlss\") pod \"neutron-db-create-2s986\" (UID: \"705bcc31-a619-447e-b29a-e98c322e5617\") " pod="openstack/neutron-db-create-2s986" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.287081 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795846498c-x69sr"] Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.288916 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.298675 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795846498c-x69sr"] Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.317889 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pg5rw"] Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.337709 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmlss\" (UniqueName: \"kubernetes.io/projected/705bcc31-a619-447e-b29a-e98c322e5617-kube-api-access-kmlss\") pod \"neutron-db-create-2s986\" (UID: \"705bcc31-a619-447e-b29a-e98c322e5617\") " pod="openstack/neutron-db-create-2s986" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.337764 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-config-data\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.337799 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-combined-ca-bundle\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.337899 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7hnv\" (UniqueName: 
\"kubernetes.io/projected/232038a6-156e-437b-9975-ac0fb0385c76-kube-api-access-q7hnv\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.364247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmlss\" (UniqueName: \"kubernetes.io/projected/705bcc31-a619-447e-b29a-e98c322e5617-kube-api-access-kmlss\") pod \"neutron-db-create-2s986\" (UID: \"705bcc31-a619-447e-b29a-e98c322e5617\") " pod="openstack/neutron-db-create-2s986" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439107 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-config\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439159 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-svc\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7hnv\" (UniqueName: \"kubernetes.io/projected/232038a6-156e-437b-9975-ac0fb0385c76-kube-api-access-q7hnv\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439231 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-config-data\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439257 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-swift-storage-0\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439276 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-nb\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-combined-ca-bundle\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439325 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj96j\" (UniqueName: 
\"kubernetes.io/projected/1f0e7b7a-7228-42df-a934-a2900ac292a5-kube-api-access-rj96j\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.439392 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-sb\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.444193 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2s986" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.444547 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-config-data\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.457937 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-combined-ca-bundle\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.460726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7hnv\" (UniqueName: \"kubernetes.io/projected/232038a6-156e-437b-9975-ac0fb0385c76-kube-api-access-q7hnv\") pod \"keystone-db-sync-pg5rw\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") " pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.544597 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-swift-storage-0\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.545113 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-nb\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.545495 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-swift-storage-0\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.546129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-nb\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.546277 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj96j\" (UniqueName: \"kubernetes.io/projected/1f0e7b7a-7228-42df-a934-a2900ac292a5-kube-api-access-rj96j\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.546855 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-sb\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.546930 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-config\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.547051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-svc\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.547777 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-svc\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.547885 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-config\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.548227 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-sb\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.567395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj96j\" (UniqueName: \"kubernetes.io/projected/1f0e7b7a-7228-42df-a934-a2900ac292a5-kube-api-access-rj96j\") pod \"dnsmasq-dns-795846498c-x69sr\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.575907 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pg5rw" Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.697872 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6csh9"] Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.727889 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795846498c-x69sr"
Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.745550 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6csh9" event={"ID":"991076c0-40c3-4bdb-9766-a2c71b011caf","Type":"ContainerStarted","Data":"34a415e1ae7a37c0ac73fe72e013acfcff397f8b391b5dd259d841e296866f6f"}
Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.821894 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cwhzf"]
Oct 13 06:46:03 crc kubenswrapper[4833]: W1013 06:46:03.860394 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod803c0a56_1e0a_4c20_a1c7_32ecf709cda4.slice/crio-2084d7454140dd07b4e06276eba5c95ec02473ddcec21e205bfff0bc3fc38ac4 WatchSource:0}: Error finding container 2084d7454140dd07b4e06276eba5c95ec02473ddcec21e205bfff0bc3fc38ac4: Status 404 returned error can't find the container with id 2084d7454140dd07b4e06276eba5c95ec02473ddcec21e205bfff0bc3fc38ac4
Oct 13 06:46:03 crc kubenswrapper[4833]: I1013 06:46:03.963097 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pg5rw"]
Oct 13 06:46:03 crc kubenswrapper[4833]: W1013 06:46:03.982170 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod232038a6_156e_437b_9975_ac0fb0385c76.slice/crio-3678a49e1afa66c5c6e5d890acb46fcd6873a93b1a6ccf5a1f3d22217f178778 WatchSource:0}: Error finding container 3678a49e1afa66c5c6e5d890acb46fcd6873a93b1a6ccf5a1f3d22217f178778: Status 404 returned error can't find the container with id 3678a49e1afa66c5c6e5d890acb46fcd6873a93b1a6ccf5a1f3d22217f178778
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.020945 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2s986"]
Oct 13 06:46:04 crc kubenswrapper[4833]: W1013 06:46:04.024288 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod705bcc31_a619_447e_b29a_e98c322e5617.slice/crio-fe39828944a8587de590b47905b561ee70d8957170485577b438bd5b9adde27f WatchSource:0}: Error finding container fe39828944a8587de590b47905b561ee70d8957170485577b438bd5b9adde27f: Status 404 returned error can't find the container with id fe39828944a8587de590b47905b561ee70d8957170485577b438bd5b9adde27f
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.268170 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795846498c-x69sr"]
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.756938 4833 generic.go:334] "Generic (PLEG): container finished" podID="1f0e7b7a-7228-42df-a934-a2900ac292a5" containerID="69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d" exitCode=0
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.757043 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-x69sr" event={"ID":"1f0e7b7a-7228-42df-a934-a2900ac292a5","Type":"ContainerDied","Data":"69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d"}
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.757259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-x69sr" event={"ID":"1f0e7b7a-7228-42df-a934-a2900ac292a5","Type":"ContainerStarted","Data":"4453edb659857abfa3b8f3ee2b6289284e7f322fe1fe47d3245730169a0a05c4"}
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.760055 4833 generic.go:334] "Generic (PLEG): container finished" podID="803c0a56-1e0a-4c20-a1c7-32ecf709cda4" containerID="8d5de2fe55ed4c376d29367c220f041f0708e49fe8ccbde857152a78ed7c46e7" exitCode=0
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.760119 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwhzf" event={"ID":"803c0a56-1e0a-4c20-a1c7-32ecf709cda4","Type":"ContainerDied","Data":"8d5de2fe55ed4c376d29367c220f041f0708e49fe8ccbde857152a78ed7c46e7"}
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.760143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwhzf" event={"ID":"803c0a56-1e0a-4c20-a1c7-32ecf709cda4","Type":"ContainerStarted","Data":"2084d7454140dd07b4e06276eba5c95ec02473ddcec21e205bfff0bc3fc38ac4"}
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.761697 4833 generic.go:334] "Generic (PLEG): container finished" podID="991076c0-40c3-4bdb-9766-a2c71b011caf" containerID="88dd9795a2f0d6b1192640550d57cb400e3c38ef205f9ffda1edfab60c02010b" exitCode=0
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.761754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6csh9" event={"ID":"991076c0-40c3-4bdb-9766-a2c71b011caf","Type":"ContainerDied","Data":"88dd9795a2f0d6b1192640550d57cb400e3c38ef205f9ffda1edfab60c02010b"}
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.763003 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pg5rw" event={"ID":"232038a6-156e-437b-9975-ac0fb0385c76","Type":"ContainerStarted","Data":"3678a49e1afa66c5c6e5d890acb46fcd6873a93b1a6ccf5a1f3d22217f178778"}
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.765247 4833 generic.go:334] "Generic (PLEG): container finished" podID="705bcc31-a619-447e-b29a-e98c322e5617" containerID="b3cd93d18ecb51926f4360f17dfaf0788c2428eba5b194104e0d9ba177142a5c" exitCode=0
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.765338 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2s986" event={"ID":"705bcc31-a619-447e-b29a-e98c322e5617","Type":"ContainerDied","Data":"b3cd93d18ecb51926f4360f17dfaf0788c2428eba5b194104e0d9ba177142a5c"}
Oct 13 06:46:04 crc kubenswrapper[4833]: I1013 06:46:04.765364 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2s986" event={"ID":"705bcc31-a619-447e-b29a-e98c322e5617","Type":"ContainerStarted","Data":"fe39828944a8587de590b47905b561ee70d8957170485577b438bd5b9adde27f"}
Oct 13 06:46:05 crc kubenswrapper[4833]: I1013 06:46:05.790947 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-x69sr" event={"ID":"1f0e7b7a-7228-42df-a934-a2900ac292a5","Type":"ContainerStarted","Data":"1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645"}
Oct 13 06:46:05 crc kubenswrapper[4833]: I1013 06:46:05.815387 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795846498c-x69sr" podStartSLOduration=2.815365834 podStartE2EDuration="2.815365834s" podCreationTimestamp="2025-10-13 06:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:05.813728657 +0000 UTC m=+1055.914151573" watchObservedRunningTime="2025-10-13 06:46:05.815365834 +0000 UTC m=+1055.915788750"
Oct 13 06:46:06 crc kubenswrapper[4833]: I1013 06:46:06.801962 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795846498c-x69sr"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.620342 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwhzf"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.625621 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6csh9"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.630938 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2s986"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.733294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktv6\" (UniqueName: \"kubernetes.io/projected/991076c0-40c3-4bdb-9766-a2c71b011caf-kube-api-access-tktv6\") pod \"991076c0-40c3-4bdb-9766-a2c71b011caf\" (UID: \"991076c0-40c3-4bdb-9766-a2c71b011caf\") "
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.733385 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmlss\" (UniqueName: \"kubernetes.io/projected/705bcc31-a619-447e-b29a-e98c322e5617-kube-api-access-kmlss\") pod \"705bcc31-a619-447e-b29a-e98c322e5617\" (UID: \"705bcc31-a619-447e-b29a-e98c322e5617\") "
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.733530 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s2fh\" (UniqueName: \"kubernetes.io/projected/803c0a56-1e0a-4c20-a1c7-32ecf709cda4-kube-api-access-8s2fh\") pod \"803c0a56-1e0a-4c20-a1c7-32ecf709cda4\" (UID: \"803c0a56-1e0a-4c20-a1c7-32ecf709cda4\") "
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.737327 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803c0a56-1e0a-4c20-a1c7-32ecf709cda4-kube-api-access-8s2fh" (OuterVolumeSpecName: "kube-api-access-8s2fh") pod "803c0a56-1e0a-4c20-a1c7-32ecf709cda4" (UID: "803c0a56-1e0a-4c20-a1c7-32ecf709cda4"). InnerVolumeSpecName "kube-api-access-8s2fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.737406 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991076c0-40c3-4bdb-9766-a2c71b011caf-kube-api-access-tktv6" (OuterVolumeSpecName: "kube-api-access-tktv6") pod "991076c0-40c3-4bdb-9766-a2c71b011caf" (UID: "991076c0-40c3-4bdb-9766-a2c71b011caf"). InnerVolumeSpecName "kube-api-access-tktv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.737639 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705bcc31-a619-447e-b29a-e98c322e5617-kube-api-access-kmlss" (OuterVolumeSpecName: "kube-api-access-kmlss") pod "705bcc31-a619-447e-b29a-e98c322e5617" (UID: "705bcc31-a619-447e-b29a-e98c322e5617"). InnerVolumeSpecName "kube-api-access-kmlss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.820450 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwhzf"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.820449 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwhzf" event={"ID":"803c0a56-1e0a-4c20-a1c7-32ecf709cda4","Type":"ContainerDied","Data":"2084d7454140dd07b4e06276eba5c95ec02473ddcec21e205bfff0bc3fc38ac4"}
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.820593 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2084d7454140dd07b4e06276eba5c95ec02473ddcec21e205bfff0bc3fc38ac4"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.822754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6csh9" event={"ID":"991076c0-40c3-4bdb-9766-a2c71b011caf","Type":"ContainerDied","Data":"34a415e1ae7a37c0ac73fe72e013acfcff397f8b391b5dd259d841e296866f6f"}
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.822776 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34a415e1ae7a37c0ac73fe72e013acfcff397f8b391b5dd259d841e296866f6f"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.822902 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6csh9"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.824738 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pg5rw" event={"ID":"232038a6-156e-437b-9975-ac0fb0385c76","Type":"ContainerStarted","Data":"569297508f63a6c413306971b9b99483a9541ed3d6f625259c21948210f4cc46"}
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.829800 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2s986" event={"ID":"705bcc31-a619-447e-b29a-e98c322e5617","Type":"ContainerDied","Data":"fe39828944a8587de590b47905b561ee70d8957170485577b438bd5b9adde27f"}
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.829822 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe39828944a8587de590b47905b561ee70d8957170485577b438bd5b9adde27f"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.829864 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2s986"
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.835629 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmlss\" (UniqueName: \"kubernetes.io/projected/705bcc31-a619-447e-b29a-e98c322e5617-kube-api-access-kmlss\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.835660 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s2fh\" (UniqueName: \"kubernetes.io/projected/803c0a56-1e0a-4c20-a1c7-32ecf709cda4-kube-api-access-8s2fh\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:08 crc kubenswrapper[4833]: I1013 06:46:08.835674 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tktv6\" (UniqueName: \"kubernetes.io/projected/991076c0-40c3-4bdb-9766-a2c71b011caf-kube-api-access-tktv6\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:09 crc kubenswrapper[4833]: I1013 06:46:09.660713 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pg5rw" podStartSLOduration=2.148178738 podStartE2EDuration="6.660691249s" podCreationTimestamp="2025-10-13 06:46:03 +0000 UTC" firstStartedPulling="2025-10-13 06:46:03.986496307 +0000 UTC m=+1054.086919223" lastFinishedPulling="2025-10-13 06:46:08.499008818 +0000 UTC m=+1058.599431734" observedRunningTime="2025-10-13 06:46:08.846596062 +0000 UTC m=+1058.947018988" watchObservedRunningTime="2025-10-13 06:46:09.660691249 +0000 UTC m=+1059.761114165"
Oct 13 06:46:11 crc kubenswrapper[4833]: I1013 06:46:11.861996 4833 generic.go:334] "Generic (PLEG): container finished" podID="232038a6-156e-437b-9975-ac0fb0385c76" containerID="569297508f63a6c413306971b9b99483a9541ed3d6f625259c21948210f4cc46" exitCode=0
Oct 13 06:46:11 crc kubenswrapper[4833]: I1013 06:46:11.862063 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pg5rw" event={"ID":"232038a6-156e-437b-9975-ac0fb0385c76","Type":"ContainerDied","Data":"569297508f63a6c413306971b9b99483a9541ed3d6f625259c21948210f4cc46"}
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.836096 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b181-account-create-7wdwx"]
Oct 13 06:46:12 crc kubenswrapper[4833]: E1013 06:46:12.836501 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705bcc31-a619-447e-b29a-e98c322e5617" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.836526 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="705bcc31-a619-447e-b29a-e98c322e5617" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: E1013 06:46:12.836572 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803c0a56-1e0a-4c20-a1c7-32ecf709cda4" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.836582 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="803c0a56-1e0a-4c20-a1c7-32ecf709cda4" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: E1013 06:46:12.836605 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991076c0-40c3-4bdb-9766-a2c71b011caf" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.836615 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="991076c0-40c3-4bdb-9766-a2c71b011caf" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.836840 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="803c0a56-1e0a-4c20-a1c7-32ecf709cda4" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.836867 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="705bcc31-a619-447e-b29a-e98c322e5617" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.836894 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="991076c0-40c3-4bdb-9766-a2c71b011caf" containerName="mariadb-database-create"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.837574 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b181-account-create-7wdwx"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.839364 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Oct 13 06:46:12 crc kubenswrapper[4833]: I1013 06:46:12.849647 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b181-account-create-7wdwx"]
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.006475 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xmzd\" (UniqueName: \"kubernetes.io/projected/5920217a-f2c4-4b9a-97ac-b5b98be2e85d-kube-api-access-6xmzd\") pod \"cinder-b181-account-create-7wdwx\" (UID: \"5920217a-f2c4-4b9a-97ac-b5b98be2e85d\") " pod="openstack/cinder-b181-account-create-7wdwx"
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.108218 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xmzd\" (UniqueName: \"kubernetes.io/projected/5920217a-f2c4-4b9a-97ac-b5b98be2e85d-kube-api-access-6xmzd\") pod \"cinder-b181-account-create-7wdwx\" (UID: \"5920217a-f2c4-4b9a-97ac-b5b98be2e85d\") " pod="openstack/cinder-b181-account-create-7wdwx"
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.144525 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xmzd\" (UniqueName: \"kubernetes.io/projected/5920217a-f2c4-4b9a-97ac-b5b98be2e85d-kube-api-access-6xmzd\") pod \"cinder-b181-account-create-7wdwx\" (UID: \"5920217a-f2c4-4b9a-97ac-b5b98be2e85d\") " pod="openstack/cinder-b181-account-create-7wdwx"
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.220756 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b181-account-create-7wdwx"
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.350531 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pg5rw"
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.515764 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7hnv\" (UniqueName: \"kubernetes.io/projected/232038a6-156e-437b-9975-ac0fb0385c76-kube-api-access-q7hnv\") pod \"232038a6-156e-437b-9975-ac0fb0385c76\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") "
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.515859 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-config-data\") pod \"232038a6-156e-437b-9975-ac0fb0385c76\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") "
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.515910 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-combined-ca-bundle\") pod \"232038a6-156e-437b-9975-ac0fb0385c76\" (UID: \"232038a6-156e-437b-9975-ac0fb0385c76\") "
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.521520 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232038a6-156e-437b-9975-ac0fb0385c76-kube-api-access-q7hnv" (OuterVolumeSpecName: "kube-api-access-q7hnv") pod "232038a6-156e-437b-9975-ac0fb0385c76" (UID: "232038a6-156e-437b-9975-ac0fb0385c76"). InnerVolumeSpecName "kube-api-access-q7hnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.540953 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "232038a6-156e-437b-9975-ac0fb0385c76" (UID: "232038a6-156e-437b-9975-ac0fb0385c76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.562878 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-config-data" (OuterVolumeSpecName: "config-data") pod "232038a6-156e-437b-9975-ac0fb0385c76" (UID: "232038a6-156e-437b-9975-ac0fb0385c76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.617723 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7hnv\" (UniqueName: \"kubernetes.io/projected/232038a6-156e-437b-9975-ac0fb0385c76-kube-api-access-q7hnv\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.617773 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.617799 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232038a6-156e-437b-9975-ac0fb0385c76-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.623018 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b181-account-create-7wdwx"]
Oct 13 06:46:13 crc kubenswrapper[4833]: W1013 06:46:13.627733 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5920217a_f2c4_4b9a_97ac_b5b98be2e85d.slice/crio-0826e4a44e02b058b744a772180552f2e20f7a61d82f45630b415ee6f6265ffb WatchSource:0}: Error finding container 0826e4a44e02b058b744a772180552f2e20f7a61d82f45630b415ee6f6265ffb: Status 404 returned error can't find the container with id 0826e4a44e02b058b744a772180552f2e20f7a61d82f45630b415ee6f6265ffb
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.729699 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-795846498c-x69sr"
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.788412 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-cblgb"]
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.788945 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" podUID="05705398-ade7-423c-8752-10e9255703f6" containerName="dnsmasq-dns" containerID="cri-o://a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa" gracePeriod=10
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.894564 4833 generic.go:334] "Generic (PLEG): container finished" podID="5920217a-f2c4-4b9a-97ac-b5b98be2e85d" containerID="fe364208d348b90b70689ae14abb9a2d806a31109fc66fccd1952b74f9640686" exitCode=0
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.894667 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b181-account-create-7wdwx" event={"ID":"5920217a-f2c4-4b9a-97ac-b5b98be2e85d","Type":"ContainerDied","Data":"fe364208d348b90b70689ae14abb9a2d806a31109fc66fccd1952b74f9640686"}
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.894700 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b181-account-create-7wdwx" event={"ID":"5920217a-f2c4-4b9a-97ac-b5b98be2e85d","Type":"ContainerStarted","Data":"0826e4a44e02b058b744a772180552f2e20f7a61d82f45630b415ee6f6265ffb"}
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.900248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pg5rw" event={"ID":"232038a6-156e-437b-9975-ac0fb0385c76","Type":"ContainerDied","Data":"3678a49e1afa66c5c6e5d890acb46fcd6873a93b1a6ccf5a1f3d22217f178778"}
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.900294 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3678a49e1afa66c5c6e5d890acb46fcd6873a93b1a6ccf5a1f3d22217f178778"
Oct 13 06:46:13 crc kubenswrapper[4833]: I1013 06:46:13.900358 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pg5rw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.144261 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"]
Oct 13 06:46:14 crc kubenswrapper[4833]: E1013 06:46:14.144665 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232038a6-156e-437b-9975-ac0fb0385c76" containerName="keystone-db-sync"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.144678 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="232038a6-156e-437b-9975-ac0fb0385c76" containerName="keystone-db-sync"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.144833 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="232038a6-156e-437b-9975-ac0fb0385c76" containerName="keystone-db-sync"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.145663 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.157172 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.200835 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b5wjw"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.202079 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.205312 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.205437 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-62lpv"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.205333 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.205766 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.215183 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5wjw"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.230177 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.230272 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-svc\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.230296 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7skln\" (UniqueName: \"kubernetes.io/projected/3218fe1c-c66e-4f62-aea5-e0aee54359af-kube-api-access-7skln\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.230328 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.230423 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.230450 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-config\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.307117 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-cblgb"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332563 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-fernet-keys\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332610 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-config\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-config-data\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332673 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-combined-ca-bundle\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332713 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-scripts\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332748 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332771 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtw7\" (UniqueName: \"kubernetes.io/projected/a28b5320-1046-4ca8-974d-185760b4e612-kube-api-access-xbtw7\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332847 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-svc\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332879 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7skln\" (UniqueName: \"kubernetes.io/projected/3218fe1c-c66e-4f62-aea5-e0aee54359af-kube-api-access-7skln\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332900 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332945 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-credential-keys\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.332980 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.337783 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.338355 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-config\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.339401 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-svc\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.339397 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.339812 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.385494 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7skln\" (UniqueName: \"kubernetes.io/projected/3218fe1c-c66e-4f62-aea5-e0aee54359af-kube-api-access-7skln\") pod \"dnsmasq-dns-6b4bfdd7f7-qt62p\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.387955 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:46:14 crc kubenswrapper[4833]: E1013 06:46:14.388298 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05705398-ade7-423c-8752-10e9255703f6" containerName="init"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.388310 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="05705398-ade7-423c-8752-10e9255703f6" containerName="init"
Oct 13 06:46:14 crc kubenswrapper[4833]: E1013 06:46:14.388343 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05705398-ade7-423c-8752-10e9255703f6" containerName="dnsmasq-dns"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.388349 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="05705398-ade7-423c-8752-10e9255703f6" containerName="dnsmasq-dns"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.388486 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="05705398-ade7-423c-8752-10e9255703f6" containerName="dnsmasq-dns"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.395207 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.398380 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.398611 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.401793 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.434326 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-config\") pod \"05705398-ade7-423c-8752-10e9255703f6\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") "
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.434428 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-swift-storage-0\") pod \"05705398-ade7-423c-8752-10e9255703f6\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") "
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.434479 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-sb\") pod \"05705398-ade7-423c-8752-10e9255703f6\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") "
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.434521 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ctjk\" (UniqueName: \"kubernetes.io/projected/05705398-ade7-423c-8752-10e9255703f6-kube-api-access-7ctjk\") pod \"05705398-ade7-423c-8752-10e9255703f6\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") "
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.434559 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-nb\") pod \"05705398-ade7-423c-8752-10e9255703f6\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") "
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.434582 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-svc\") pod \"05705398-ade7-423c-8752-10e9255703f6\" (UID: \"05705398-ade7-423c-8752-10e9255703f6\") "
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.435582 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-combined-ca-bundle\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.435635 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-scripts\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.435710 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtw7\" (UniqueName: \"kubernetes.io/projected/a28b5320-1046-4ca8-974d-185760b4e612-kube-api-access-xbtw7\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.435844 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-credential-keys\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.435891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-fernet-keys\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.435918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-config-data\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.452873 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-config-data\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.453155 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-fernet-keys\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.457659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-combined-ca-bundle\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.457996 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-scripts\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.463450 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-credential-keys\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.488670 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05705398-ade7-423c-8752-10e9255703f6-kube-api-access-7ctjk" (OuterVolumeSpecName: "kube-api-access-7ctjk") pod "05705398-ade7-423c-8752-10e9255703f6" (UID: "05705398-ade7-423c-8752-10e9255703f6"). InnerVolumeSpecName "kube-api-access-7ctjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.491717 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.492554 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.511922 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbtw7\" (UniqueName: \"kubernetes.io/projected/a28b5320-1046-4ca8-974d-185760b4e612-kube-api-access-xbtw7\") pod \"keystone-bootstrap-b5wjw\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546212 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546295 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-scripts\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clj2v\" (UniqueName: \"kubernetes.io/projected/88476575-d57c-4196-bd20-eee1fd482ead-kube-api-access-clj2v\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546457 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-config-data\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546494 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-log-httpd\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546518 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546700 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-run-httpd\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546821 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ctjk\" (UniqueName: \"kubernetes.io/projected/05705398-ade7-423c-8752-10e9255703f6-kube-api-access-7ctjk\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.546895 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-vljcq"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.550565 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.556316 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.556343 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4558w"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.556620 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.562104 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vljcq"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.576059 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05705398-ade7-423c-8752-10e9255703f6" (UID: "05705398-ade7-423c-8752-10e9255703f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.578429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-config" (OuterVolumeSpecName: "config") pod "05705398-ade7-423c-8752-10e9255703f6" (UID: "05705398-ade7-423c-8752-10e9255703f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.582009 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "05705398-ade7-423c-8752-10e9255703f6" (UID: "05705398-ade7-423c-8752-10e9255703f6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.587891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05705398-ade7-423c-8752-10e9255703f6" (UID: "05705398-ade7-423c-8752-10e9255703f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.593052 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-kndjd"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.595009 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.598222 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05705398-ade7-423c-8752-10e9255703f6" (UID: "05705398-ade7-423c-8752-10e9255703f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.607601 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-kndjd"]
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.626916 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5wjw"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.653689 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-combined-ca-bundle\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.653761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-run-httpd\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.653791 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-config-data\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.653851 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trg8k\" (UniqueName: \"kubernetes.io/projected/401c9b31-e308-4305-b56e-29fc8594856d-kube-api-access-trg8k\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.653879 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.653911 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401c9b31-e308-4305-b56e-29fc8594856d-logs\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.653933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-scripts\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.653992 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clj2v\" (UniqueName: \"kubernetes.io/projected/88476575-d57c-4196-bd20-eee1fd482ead-kube-api-access-clj2v\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654017 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-scripts\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654042 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-config-data\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654069 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-log-httpd\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654146 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-config\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654161 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654175 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654187 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654202 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05705398-ade7-423c-8752-10e9255703f6-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654241 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-run-httpd\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.654973 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-log-httpd\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.660350 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.662988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-scripts\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.663392 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.665743 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-config-data\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.682055 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clj2v\" (UniqueName: \"kubernetes.io/projected/88476575-d57c-4196-bd20-eee1fd482ead-kube-api-access-clj2v\") pod \"ceilometer-0\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") " pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.755608 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.755976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trg8k\" (UniqueName: \"kubernetes.io/projected/401c9b31-e308-4305-b56e-29fc8594856d-kube-api-access-trg8k\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756013 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-config\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756054 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndtr\" (UniqueName: \"kubernetes.io/projected/87554ede-75d3-4ee6-a16a-71c768cb09ef-kube-api-access-6ndtr\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756085 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756107 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401c9b31-e308-4305-b56e-29fc8594856d-logs\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756175 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-scripts\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756201 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756221 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756289 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-combined-ca-bundle\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.756333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-config-data\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.759212 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401c9b31-e308-4305-b56e-29fc8594856d-logs\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.760821 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-config-data\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.761504 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-scripts\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.766227 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-combined-ca-bundle\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.775717 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trg8k\" (UniqueName: \"kubernetes.io/projected/401c9b31-e308-4305-b56e-29fc8594856d-kube-api-access-trg8k\") pod \"placement-db-sync-vljcq\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " pod="openstack/placement-db-sync-vljcq"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.812586 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.857671 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.857740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.857867 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.857902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-config\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.857929 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ndtr\" (UniqueName: \"kubernetes.io/projected/87554ede-75d3-4ee6-a16a-71c768cb09ef-kube-api-access-6ndtr\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.857955 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd"
Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.858937 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.859485 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.860028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.861527 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-config\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.862126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.876205 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vljcq" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.878313 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ndtr\" (UniqueName: \"kubernetes.io/projected/87554ede-75d3-4ee6-a16a-71c768cb09ef-kube-api-access-6ndtr\") pod \"dnsmasq-dns-5dc68bd5-kndjd\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.919074 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.939258 4833 generic.go:334] "Generic (PLEG): container finished" podID="05705398-ade7-423c-8752-10e9255703f6" containerID="a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa" exitCode=0 Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.939500 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.941109 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" event={"ID":"05705398-ade7-423c-8752-10e9255703f6","Type":"ContainerDied","Data":"a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa"} Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.941414 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" event={"ID":"05705398-ade7-423c-8752-10e9255703f6","Type":"ContainerDied","Data":"907e63b89baae5ed23dd89c29cb0073b40b33d1f670272a8c48eb1a6dd4c22d9"} Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.941445 4833 scope.go:117] "RemoveContainer" containerID="a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.972676 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-cblgb"] Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.979386 4833 scope.go:117] "RemoveContainer" containerID="6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8" Oct 13 06:46:14 crc kubenswrapper[4833]: I1013 06:46:14.994029 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-cblgb"] Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.021652 4833 scope.go:117] "RemoveContainer" containerID="a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa" Oct 13 06:46:15 crc kubenswrapper[4833]: E1013 06:46:15.030584 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa\": container with ID starting with a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa not found: ID does not exist" containerID="a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.030634 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa"} err="failed to get container status \"a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa\": rpc error: code = NotFound desc = could not find container \"a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa\": container with ID starting with a731ee88aaa161625d5ec9432dd0f6a28884711cd60c27cd7ceb5b277b9b36fa not found: ID does not exist" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.030665 4833 scope.go:117] "RemoveContainer" containerID="6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.030997 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"] Oct 13 06:46:15 crc kubenswrapper[4833]: E1013 06:46:15.032259 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8\": container with ID starting with 6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8 not found: ID does not exist" containerID="6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.032295 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8"} err="failed to get container status \"6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8\": rpc error: code = NotFound desc = could not find container \"6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8\": container with ID starting with 6172593b63d459c776d7c798d25eda158ccb4dc63722d97e0a19c9e4ffecebb8 not found: ID does not exist" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.147075 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5wjw"] Oct 13 06:46:15 crc kubenswrapper[4833]: W1013 06:46:15.183412 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda28b5320_1046_4ca8_974d_185760b4e612.slice/crio-1aabfb3906ab84d87d575531ed9bdfa3085dc4f806a3f8f6b35fba0dde2f4d3b WatchSource:0}: Error finding container 1aabfb3906ab84d87d575531ed9bdfa3085dc4f806a3f8f6b35fba0dde2f4d3b: Status 404 returned error can't find the container with id 1aabfb3906ab84d87d575531ed9bdfa3085dc4f806a3f8f6b35fba0dde2f4d3b Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.283545 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.288403 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.290669 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.293344 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.293609 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-745xz" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.293820 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.297566 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.352714 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.355960 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.359415 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.359747 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.370264 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.372315 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.372348 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-logs\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.372403 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.372420 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.372443 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.372462 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srbl\" (UniqueName: \"kubernetes.io/projected/b9c9a79a-05fb-4a3f-86ce-db82148329b7-kube-api-access-7srbl\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.372483 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.372529 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.379275 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.450547 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-kndjd"] Oct 13 06:46:15 crc kubenswrapper[4833]: W1013 06:46:15.468765 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87554ede_75d3_4ee6_a16a_71c768cb09ef.slice/crio-647deb5dbc02c360162c70ab012bd5b75679da6988456c7fa707b74b06e4bb3e WatchSource:0}: Error finding container 647deb5dbc02c360162c70ab012bd5b75679da6988456c7fa707b74b06e4bb3e: Status 404 returned error can't find the container with id 647deb5dbc02c360162c70ab012bd5b75679da6988456c7fa707b74b06e4bb3e Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.473932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.474034 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.474062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.474097 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.474135 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8pm\" (UniqueName: \"kubernetes.io/projected/662118d5-f41a-4bd7-bb54-bdd93de97bb3-kube-api-access-gv8pm\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.474172 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.474207 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-logs\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.474235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.474230 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.475246 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.475573 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-logs\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.476342 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.476379 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.476429 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.476470 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.476499 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.476550 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.476587 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7srbl\" (UniqueName: \"kubernetes.io/projected/b9c9a79a-05fb-4a3f-86ce-db82148329b7-kube-api-access-7srbl\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.476615 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.485826 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.492416 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.499519 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.516365 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.537321 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b181-account-create-7wdwx" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.553235 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7srbl\" (UniqueName: \"kubernetes.io/projected/b9c9a79a-05fb-4a3f-86ce-db82148329b7-kube-api-access-7srbl\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.553407 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.560963 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vljcq"] Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.579054 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xmzd\" (UniqueName: \"kubernetes.io/projected/5920217a-f2c4-4b9a-97ac-b5b98be2e85d-kube-api-access-6xmzd\") pod \"5920217a-f2c4-4b9a-97ac-b5b98be2e85d\" (UID: \"5920217a-f2c4-4b9a-97ac-b5b98be2e85d\") " Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.579715 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.579777 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.579802 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.579841 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.579949 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.580025 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.580075 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.580117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8pm\" (UniqueName: \"kubernetes.io/projected/662118d5-f41a-4bd7-bb54-bdd93de97bb3-kube-api-access-gv8pm\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.581795 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.584174 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.584294 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.586324 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.588035 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.595917 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5920217a-f2c4-4b9a-97ac-b5b98be2e85d-kube-api-access-6xmzd" (OuterVolumeSpecName: "kube-api-access-6xmzd") pod "5920217a-f2c4-4b9a-97ac-b5b98be2e85d" (UID: "5920217a-f2c4-4b9a-97ac-b5b98be2e85d"). InnerVolumeSpecName "kube-api-access-6xmzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.596919 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.597921 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.610961 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8pm\" (UniqueName: \"kubernetes.io/projected/662118d5-f41a-4bd7-bb54-bdd93de97bb3-kube-api-access-gv8pm\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.621445 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.685569 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xmzd\" (UniqueName: \"kubernetes.io/projected/5920217a-f2c4-4b9a-97ac-b5b98be2e85d-kube-api-access-6xmzd\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.790685 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.818220 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.987584 4833 generic.go:334] "Generic (PLEG): container finished" podID="3218fe1c-c66e-4f62-aea5-e0aee54359af" containerID="697c219a3e3efd40b369a5c15e000ed410b6492d98cbac53864e12ac21f72670" exitCode=0 Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.987920 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p" event={"ID":"3218fe1c-c66e-4f62-aea5-e0aee54359af","Type":"ContainerDied","Data":"697c219a3e3efd40b369a5c15e000ed410b6492d98cbac53864e12ac21f72670"} Oct 13 06:46:15 crc kubenswrapper[4833]: I1013 06:46:15.987949 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p" event={"ID":"3218fe1c-c66e-4f62-aea5-e0aee54359af","Type":"ContainerStarted","Data":"1ec870f2e5ad32095009a1261c016f03f706253d1e5f6ec73e6f40286567ab2c"} Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.000057 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vljcq" event={"ID":"401c9b31-e308-4305-b56e-29fc8594856d","Type":"ContainerStarted","Data":"2bf42838d8feec31372ff081de6b666287cfa2db5fdf0b9639a35455dec2f0a8"} Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.005979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88476575-d57c-4196-bd20-eee1fd482ead","Type":"ContainerStarted","Data":"f9a47226abeb39ecf5f631c40e286cbb695fb4fd5de13b8a075f4f51aee27efd"} Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.014207 4833 generic.go:334] "Generic (PLEG): container finished" podID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerID="ec50a07230d9abf532354ae3d840994406bec2abfb9b82f9150e98b445ac952c" exitCode=0 Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.014276 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" event={"ID":"87554ede-75d3-4ee6-a16a-71c768cb09ef","Type":"ContainerDied","Data":"ec50a07230d9abf532354ae3d840994406bec2abfb9b82f9150e98b445ac952c"} Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.014304 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" event={"ID":"87554ede-75d3-4ee6-a16a-71c768cb09ef","Type":"ContainerStarted","Data":"647deb5dbc02c360162c70ab012bd5b75679da6988456c7fa707b74b06e4bb3e"} Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.034613 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5wjw" event={"ID":"a28b5320-1046-4ca8-974d-185760b4e612","Type":"ContainerStarted","Data":"af24bd2cda69df58cb2ef7d804c8efc24c48e8e22496a30e3272dfcd74a98be6"} Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.034658 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5wjw" event={"ID":"a28b5320-1046-4ca8-974d-185760b4e612","Type":"ContainerStarted","Data":"1aabfb3906ab84d87d575531ed9bdfa3085dc4f806a3f8f6b35fba0dde2f4d3b"} Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.052186 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b181-account-create-7wdwx" event={"ID":"5920217a-f2c4-4b9a-97ac-b5b98be2e85d","Type":"ContainerDied","Data":"0826e4a44e02b058b744a772180552f2e20f7a61d82f45630b415ee6f6265ffb"} Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.052475 4833 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0826e4a44e02b058b744a772180552f2e20f7a61d82f45630b415ee6f6265ffb" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.052525 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b181-account-create-7wdwx" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.073438 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b5wjw" podStartSLOduration=2.073395126 podStartE2EDuration="2.073395126s" podCreationTimestamp="2025-10-13 06:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:16.056674931 +0000 UTC m=+1066.157097867" watchObservedRunningTime="2025-10-13 06:46:16.073395126 +0000 UTC m=+1066.173818042" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.474587 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:16 crc kubenswrapper[4833]: W1013 06:46:16.498755 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9c9a79a_05fb_4a3f_86ce_db82148329b7.slice/crio-fb31c759c8e3844cdee6852a6986fac8f7d69b54e4b5a4dd70d12926e6057a3b WatchSource:0}: Error finding container fb31c759c8e3844cdee6852a6986fac8f7d69b54e4b5a4dd70d12926e6057a3b: Status 404 returned error can't find the container with id fb31c759c8e3844cdee6852a6986fac8f7d69b54e4b5a4dd70d12926e6057a3b Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.595802 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.599010 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.649270 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05705398-ade7-423c-8752-10e9255703f6" path="/var/lib/kubelet/pods/05705398-ade7-423c-8752-10e9255703f6/volumes" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.709440 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-swift-storage-0\") pod \"3218fe1c-c66e-4f62-aea5-e0aee54359af\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.709532 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-config\") pod \"3218fe1c-c66e-4f62-aea5-e0aee54359af\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.709596 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7skln\" (UniqueName: \"kubernetes.io/projected/3218fe1c-c66e-4f62-aea5-e0aee54359af-kube-api-access-7skln\") pod \"3218fe1c-c66e-4f62-aea5-e0aee54359af\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.709672 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-nb\") pod \"3218fe1c-c66e-4f62-aea5-e0aee54359af\" (UID: 
\"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.709786 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-sb\") pod \"3218fe1c-c66e-4f62-aea5-e0aee54359af\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.709832 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-svc\") pod \"3218fe1c-c66e-4f62-aea5-e0aee54359af\" (UID: \"3218fe1c-c66e-4f62-aea5-e0aee54359af\") " Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.715261 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3218fe1c-c66e-4f62-aea5-e0aee54359af-kube-api-access-7skln" (OuterVolumeSpecName: "kube-api-access-7skln") pod "3218fe1c-c66e-4f62-aea5-e0aee54359af" (UID: "3218fe1c-c66e-4f62-aea5-e0aee54359af"). InnerVolumeSpecName "kube-api-access-7skln". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.732890 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3218fe1c-c66e-4f62-aea5-e0aee54359af" (UID: "3218fe1c-c66e-4f62-aea5-e0aee54359af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.735413 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3218fe1c-c66e-4f62-aea5-e0aee54359af" (UID: "3218fe1c-c66e-4f62-aea5-e0aee54359af"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.736704 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3218fe1c-c66e-4f62-aea5-e0aee54359af" (UID: "3218fe1c-c66e-4f62-aea5-e0aee54359af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.738905 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-config" (OuterVolumeSpecName: "config") pod "3218fe1c-c66e-4f62-aea5-e0aee54359af" (UID: "3218fe1c-c66e-4f62-aea5-e0aee54359af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.743896 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3218fe1c-c66e-4f62-aea5-e0aee54359af" (UID: "3218fe1c-c66e-4f62-aea5-e0aee54359af"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.812424 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.812463 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.812476 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.812485 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.812494 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7skln\" (UniqueName: \"kubernetes.io/projected/3218fe1c-c66e-4f62-aea5-e0aee54359af-kube-api-access-7skln\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.812504 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3218fe1c-c66e-4f62-aea5-e0aee54359af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:16 crc kubenswrapper[4833]: I1013 06:46:16.986678 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.027567 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.056254 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.067951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9c9a79a-05fb-4a3f-86ce-db82148329b7","Type":"ContainerStarted","Data":"fb31c759c8e3844cdee6852a6986fac8f7d69b54e4b5a4dd70d12926e6057a3b"} Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.069417 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"662118d5-f41a-4bd7-bb54-bdd93de97bb3","Type":"ContainerStarted","Data":"dba18e9bb30e84b090ac850b3700c5245936bbb6495735bf0379db8a3575f9c8"} Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.071032 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p" event={"ID":"3218fe1c-c66e-4f62-aea5-e0aee54359af","Type":"ContainerDied","Data":"1ec870f2e5ad32095009a1261c016f03f706253d1e5f6ec73e6f40286567ab2c"} Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.071063 4833 scope.go:117] "RemoveContainer" containerID="697c219a3e3efd40b369a5c15e000ed410b6492d98cbac53864e12ac21f72670" Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.071081 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b4bfdd7f7-qt62p" Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.074682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" event={"ID":"87554ede-75d3-4ee6-a16a-71c768cb09ef","Type":"ContainerStarted","Data":"669a4b07d2f582062a00e9808d522c9427ff0c3677eab5936a594d9ecbbd1677"} Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.075034 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.111595 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" podStartSLOduration=3.111517023 podStartE2EDuration="3.111517023s" podCreationTimestamp="2025-10-13 06:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:17.110680899 +0000 UTC m=+1067.211103825" watchObservedRunningTime="2025-10-13 06:46:17.111517023 +0000 UTC m=+1067.211939939" Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.199178 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"] Oct 13 06:46:17 crc kubenswrapper[4833]: I1013 06:46:17.224840 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-qt62p"] Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.040644 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-ltqfn"] Oct 13 06:46:18 crc kubenswrapper[4833]: E1013 06:46:18.041330 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3218fe1c-c66e-4f62-aea5-e0aee54359af" containerName="init" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.041342 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3218fe1c-c66e-4f62-aea5-e0aee54359af" containerName="init" Oct 13 06:46:18 crc kubenswrapper[4833]: E1013 06:46:18.041359 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5920217a-f2c4-4b9a-97ac-b5b98be2e85d" containerName="mariadb-account-create" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.041365 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5920217a-f2c4-4b9a-97ac-b5b98be2e85d" containerName="mariadb-account-create" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.041559 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5920217a-f2c4-4b9a-97ac-b5b98be2e85d" containerName="mariadb-account-create" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.041595 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3218fe1c-c66e-4f62-aea5-e0aee54359af" containerName="init" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.042129 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.047007 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.047038 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.047259 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wjvmq" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.059048 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ltqfn"] Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.086634 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9c9a79a-05fb-4a3f-86ce-db82148329b7","Type":"ContainerStarted","Data":"e76cc24a2ac94de806ac4fc9d7b14ca2fbf2c4168bff8415d42bbe0da3fe7c8e"} Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.086675 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9c9a79a-05fb-4a3f-86ce-db82148329b7","Type":"ContainerStarted","Data":"2158994d9cef2d1f01dc7ac527dc46f90fdc320c4f2710c425f7843cb592ba4e"} Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.086738 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerName="glance-log" containerID="cri-o://2158994d9cef2d1f01dc7ac527dc46f90fdc320c4f2710c425f7843cb592ba4e" gracePeriod=30 Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.086950 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerName="glance-httpd" containerID="cri-o://e76cc24a2ac94de806ac4fc9d7b14ca2fbf2c4168bff8415d42bbe0da3fe7c8e" gracePeriod=30 Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.090015 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"662118d5-f41a-4bd7-bb54-bdd93de97bb3","Type":"ContainerStarted","Data":"2491b3c9fa559cb8315a06b34661008a09343924c4386d467d82e099aec1bed1"} Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.122253 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.122238284 podStartE2EDuration="4.122238284s" podCreationTimestamp="2025-10-13 06:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:18.118067183 +0000 UTC m=+1068.218490099" watchObservedRunningTime="2025-10-13 06:46:18.122238284 +0000 UTC m=+1068.222661200" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.138674 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-config-data\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.138759 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b5d6e331-404e-48b3-b9ee-66386208af92-etc-machine-id\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.138802 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-scripts\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.138924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-db-sync-config-data\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.138962 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-combined-ca-bundle\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.139130 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfq9w\" (UniqueName: \"kubernetes.io/projected/b5d6e331-404e-48b3-b9ee-66386208af92-kube-api-access-xfq9w\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.243403 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfq9w\" (UniqueName: \"kubernetes.io/projected/b5d6e331-404e-48b3-b9ee-66386208af92-kube-api-access-xfq9w\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.243474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-config-data\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.243497 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5d6e331-404e-48b3-b9ee-66386208af92-etc-machine-id\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.243521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-scripts\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.243569 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-db-sync-config-data\") pod \"cinder-db-sync-ltqfn\" (UID: 
\"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.243588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-combined-ca-bundle\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.244591 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5d6e331-404e-48b3-b9ee-66386208af92-etc-machine-id\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.263362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-config-data\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.269098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-combined-ca-bundle\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.277037 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-db-sync-config-data\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.277295 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-scripts\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.290257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfq9w\" (UniqueName: \"kubernetes.io/projected/b5d6e331-404e-48b3-b9ee-66386208af92-kube-api-access-xfq9w\") pod \"cinder-db-sync-ltqfn\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.361239 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:46:18 crc kubenswrapper[4833]: I1013 06:46:18.638220 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3218fe1c-c66e-4f62-aea5-e0aee54359af" path="/var/lib/kubelet/pods/3218fe1c-c66e-4f62-aea5-e0aee54359af/volumes" Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.106346 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-564965cbfc-cblgb" podUID="05705398-ade7-423c-8752-10e9255703f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.123774 4833 generic.go:334] "Generic (PLEG): container finished" podID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerID="e76cc24a2ac94de806ac4fc9d7b14ca2fbf2c4168bff8415d42bbe0da3fe7c8e" exitCode=143 Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.123810 4833 generic.go:334] "Generic (PLEG): container finished" podID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerID="2158994d9cef2d1f01dc7ac527dc46f90fdc320c4f2710c425f7843cb592ba4e" exitCode=143 Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.123821 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9c9a79a-05fb-4a3f-86ce-db82148329b7","Type":"ContainerDied","Data":"e76cc24a2ac94de806ac4fc9d7b14ca2fbf2c4168bff8415d42bbe0da3fe7c8e"} Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.123868 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9c9a79a-05fb-4a3f-86ce-db82148329b7","Type":"ContainerDied","Data":"2158994d9cef2d1f01dc7ac527dc46f90fdc320c4f2710c425f7843cb592ba4e"} Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.126161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"662118d5-f41a-4bd7-bb54-bdd93de97bb3","Type":"ContainerStarted","Data":"d682073c8d5d15529859df40df9640910c3634d1fa3ac73f61b6e5d053204305"} Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.126514 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerName="glance-httpd" containerID="cri-o://d682073c8d5d15529859df40df9640910c3634d1fa3ac73f61b6e5d053204305" gracePeriod=30 Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.126465 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerName="glance-log" containerID="cri-o://2491b3c9fa559cb8315a06b34661008a09343924c4386d467d82e099aec1bed1" gracePeriod=30 Oct 13 06:46:19 crc kubenswrapper[4833]: I1013 06:46:19.162693 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.162671358 podStartE2EDuration="5.162671358s" podCreationTimestamp="2025-10-13 06:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:19.14791563 +0000 UTC m=+1069.248338566" watchObservedRunningTime="2025-10-13 06:46:19.162671358 +0000 UTC m=+1069.263094274" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.136784 4833 generic.go:334] "Generic (PLEG): container finished" podID="a28b5320-1046-4ca8-974d-185760b4e612" 
containerID="af24bd2cda69df58cb2ef7d804c8efc24c48e8e22496a30e3272dfcd74a98be6" exitCode=0 Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.136862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5wjw" event={"ID":"a28b5320-1046-4ca8-974d-185760b4e612","Type":"ContainerDied","Data":"af24bd2cda69df58cb2ef7d804c8efc24c48e8e22496a30e3272dfcd74a98be6"} Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.140938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9c9a79a-05fb-4a3f-86ce-db82148329b7","Type":"ContainerDied","Data":"fb31c759c8e3844cdee6852a6986fac8f7d69b54e4b5a4dd70d12926e6057a3b"} Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.140974 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb31c759c8e3844cdee6852a6986fac8f7d69b54e4b5a4dd70d12926e6057a3b" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.143901 4833 generic.go:334] "Generic (PLEG): container finished" podID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerID="d682073c8d5d15529859df40df9640910c3634d1fa3ac73f61b6e5d053204305" exitCode=0 Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.143931 4833 generic.go:334] "Generic (PLEG): container finished" podID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerID="2491b3c9fa559cb8315a06b34661008a09343924c4386d467d82e099aec1bed1" exitCode=143 Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.143957 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"662118d5-f41a-4bd7-bb54-bdd93de97bb3","Type":"ContainerDied","Data":"d682073c8d5d15529859df40df9640910c3634d1fa3ac73f61b6e5d053204305"} Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.143983 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"662118d5-f41a-4bd7-bb54-bdd93de97bb3","Type":"ContainerDied","Data":"2491b3c9fa559cb8315a06b34661008a09343924c4386d467d82e099aec1bed1"} Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.195605 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.288317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.288391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-public-tls-certs\") pod \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.288419 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-combined-ca-bundle\") pod \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.288528 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-httpd-run\") pod \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.288606 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-config-data\") pod \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.288645 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7srbl\" (UniqueName: \"kubernetes.io/projected/b9c9a79a-05fb-4a3f-86ce-db82148329b7-kube-api-access-7srbl\") pod \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.288674 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-logs\") pod \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.288767 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-scripts\") pod \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\" (UID: \"b9c9a79a-05fb-4a3f-86ce-db82148329b7\") " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.289905 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b9c9a79a-05fb-4a3f-86ce-db82148329b7" (UID: "b9c9a79a-05fb-4a3f-86ce-db82148329b7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.291519 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-logs" (OuterVolumeSpecName: "logs") pod "b9c9a79a-05fb-4a3f-86ce-db82148329b7" (UID: "b9c9a79a-05fb-4a3f-86ce-db82148329b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.302798 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-scripts" (OuterVolumeSpecName: "scripts") pod "b9c9a79a-05fb-4a3f-86ce-db82148329b7" (UID: "b9c9a79a-05fb-4a3f-86ce-db82148329b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.308426 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c9a79a-05fb-4a3f-86ce-db82148329b7-kube-api-access-7srbl" (OuterVolumeSpecName: "kube-api-access-7srbl") pod "b9c9a79a-05fb-4a3f-86ce-db82148329b7" (UID: "b9c9a79a-05fb-4a3f-86ce-db82148329b7"). InnerVolumeSpecName "kube-api-access-7srbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.311053 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "b9c9a79a-05fb-4a3f-86ce-db82148329b7" (UID: "b9c9a79a-05fb-4a3f-86ce-db82148329b7"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.342249 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9c9a79a-05fb-4a3f-86ce-db82148329b7" (UID: "b9c9a79a-05fb-4a3f-86ce-db82148329b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.363104 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-config-data" (OuterVolumeSpecName: "config-data") pod "b9c9a79a-05fb-4a3f-86ce-db82148329b7" (UID: "b9c9a79a-05fb-4a3f-86ce-db82148329b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.366814 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b9c9a79a-05fb-4a3f-86ce-db82148329b7" (UID: "b9c9a79a-05fb-4a3f-86ce-db82148329b7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.390489 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.390575 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.390592 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.390606 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.390616 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.390627 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c9a79a-05fb-4a3f-86ce-db82148329b7-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.390637 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7srbl\" (UniqueName: \"kubernetes.io/projected/b9c9a79a-05fb-4a3f-86ce-db82148329b7-kube-api-access-7srbl\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.390647 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c9a79a-05fb-4a3f-86ce-db82148329b7-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.411903 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 13 06:46:20 crc kubenswrapper[4833]: I1013 06:46:20.492782 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.154835 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.180341 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.190884 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.219720 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:21 crc kubenswrapper[4833]: E1013 06:46:21.220308 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerName="glance-log" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.220327 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerName="glance-log" Oct 13 06:46:21 crc kubenswrapper[4833]: E1013 06:46:21.220346 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerName="glance-httpd" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.220355 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerName="glance-httpd" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.220520 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerName="glance-log" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.220586 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" containerName="glance-httpd" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.221594 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.226053 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.226291 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.228023 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.315493 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-scripts\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.315558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-logs\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.315586 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.315635 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.315656 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.315709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22q57\" (UniqueName: \"kubernetes.io/projected/5f571ae4-3483-4a8e-8f33-f445c77395c2-kube-api-access-22q57\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.315849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.316023 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.417388 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.417688 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.417767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22q57\" (UniqueName: \"kubernetes.io/projected/5f571ae4-3483-4a8e-8f33-f445c77395c2-kube-api-access-22q57\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.417808 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.417866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.417918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-scripts\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.417954 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-logs\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.417993 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.418962 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.420314 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-logs\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.420668 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.422573 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.430410 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.436380 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-scripts\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.442384 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.444572 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22q57\" (UniqueName: \"kubernetes.io/projected/5f571ae4-3483-4a8e-8f33-f445c77395c2-kube-api-access-22q57\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.456802 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " pod="openstack/glance-default-external-api-0" Oct 13 06:46:21 crc kubenswrapper[4833]: I1013 06:46:21.553037 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.648294 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c9a79a-05fb-4a3f-86ce-db82148329b7" path="/var/lib/kubelet/pods/b9c9a79a-05fb-4a3f-86ce-db82148329b7/volumes" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.666495 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.678429 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5wjw" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.741656 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbtw7\" (UniqueName: \"kubernetes.io/projected/a28b5320-1046-4ca8-974d-185760b4e612-kube-api-access-xbtw7\") pod \"a28b5320-1046-4ca8-974d-185760b4e612\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.741719 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-httpd-run\") pod \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.741765 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-combined-ca-bundle\") pod \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.741797 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-internal-tls-certs\") pod \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.741821 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-fernet-keys\") pod \"a28b5320-1046-4ca8-974d-185760b4e612\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.741845 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-credential-keys\") pod \"a28b5320-1046-4ca8-974d-185760b4e612\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.741886 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8pm\" (UniqueName: \"kubernetes.io/projected/662118d5-f41a-4bd7-bb54-bdd93de97bb3-kube-api-access-gv8pm\") pod \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.741950 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-scripts\") pod \"a28b5320-1046-4ca8-974d-185760b4e612\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " Oct 13 06:46:22 crc 
kubenswrapper[4833]: I1013 06:46:22.742016 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-config-data\") pod \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.742066 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.742101 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-combined-ca-bundle\") pod \"a28b5320-1046-4ca8-974d-185760b4e612\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.742273 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "662118d5-f41a-4bd7-bb54-bdd93de97bb3" (UID: "662118d5-f41a-4bd7-bb54-bdd93de97bb3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.742424 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-scripts\") pod \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.742484 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-config-data\") pod \"a28b5320-1046-4ca8-974d-185760b4e612\" (UID: \"a28b5320-1046-4ca8-974d-185760b4e612\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.742516 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-logs\") pod \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\" (UID: \"662118d5-f41a-4bd7-bb54-bdd93de97bb3\") " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.743286 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.743871 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-logs" (OuterVolumeSpecName: "logs") pod "662118d5-f41a-4bd7-bb54-bdd93de97bb3" (UID: "662118d5-f41a-4bd7-bb54-bdd93de97bb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.750796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28b5320-1046-4ca8-974d-185760b4e612-kube-api-access-xbtw7" (OuterVolumeSpecName: "kube-api-access-xbtw7") pod "a28b5320-1046-4ca8-974d-185760b4e612" (UID: "a28b5320-1046-4ca8-974d-185760b4e612"). InnerVolumeSpecName "kube-api-access-xbtw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.750835 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "662118d5-f41a-4bd7-bb54-bdd93de97bb3" (UID: "662118d5-f41a-4bd7-bb54-bdd93de97bb3"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.750871 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-scripts" (OuterVolumeSpecName: "scripts") pod "662118d5-f41a-4bd7-bb54-bdd93de97bb3" (UID: "662118d5-f41a-4bd7-bb54-bdd93de97bb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.751688 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-scripts" (OuterVolumeSpecName: "scripts") pod "a28b5320-1046-4ca8-974d-185760b4e612" (UID: "a28b5320-1046-4ca8-974d-185760b4e612"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.752623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a28b5320-1046-4ca8-974d-185760b4e612" (UID: "a28b5320-1046-4ca8-974d-185760b4e612"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.756963 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a28b5320-1046-4ca8-974d-185760b4e612" (UID: "a28b5320-1046-4ca8-974d-185760b4e612"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.760801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662118d5-f41a-4bd7-bb54-bdd93de97bb3-kube-api-access-gv8pm" (OuterVolumeSpecName: "kube-api-access-gv8pm") pod "662118d5-f41a-4bd7-bb54-bdd93de97bb3" (UID: "662118d5-f41a-4bd7-bb54-bdd93de97bb3"). InnerVolumeSpecName "kube-api-access-gv8pm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.842492 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c086-account-create-2xc6m"] Oct 13 06:46:22 crc kubenswrapper[4833]: E1013 06:46:22.843683 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28b5320-1046-4ca8-974d-185760b4e612" containerName="keystone-bootstrap" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.843703 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28b5320-1046-4ca8-974d-185760b4e612" containerName="keystone-bootstrap" Oct 13 06:46:22 crc kubenswrapper[4833]: E1013 06:46:22.843716 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerName="glance-httpd" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.843723 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerName="glance-httpd" Oct 13 06:46:22 crc kubenswrapper[4833]: E1013 06:46:22.843738 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerName="glance-log" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.843744 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerName="glance-log" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.843899 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerName="glance-httpd" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.843918 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" containerName="glance-log" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.843942 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28b5320-1046-4ca8-974d-185760b4e612" containerName="keystone-bootstrap" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845344 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/662118d5-f41a-4bd7-bb54-bdd93de97bb3-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845364 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbtw7\" (UniqueName: \"kubernetes.io/projected/a28b5320-1046-4ca8-974d-185760b4e612-kube-api-access-xbtw7\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845375 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845385 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845393 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8pm\" (UniqueName: \"kubernetes.io/projected/662118d5-f41a-4bd7-bb54-bdd93de97bb3-kube-api-access-gv8pm\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845402 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845425 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845434 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.845472 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c086-account-create-2xc6m" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.848021 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.868765 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c086-account-create-2xc6m"] Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.871187 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "662118d5-f41a-4bd7-bb54-bdd93de97bb3" (UID: "662118d5-f41a-4bd7-bb54-bdd93de97bb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.890122 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-config-data" (OuterVolumeSpecName: "config-data") pod "a28b5320-1046-4ca8-974d-185760b4e612" (UID: "a28b5320-1046-4ca8-974d-185760b4e612"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.893346 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a28b5320-1046-4ca8-974d-185760b4e612" (UID: "a28b5320-1046-4ca8-974d-185760b4e612"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.897916 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ltqfn"] Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.903703 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "662118d5-f41a-4bd7-bb54-bdd93de97bb3" (UID: "662118d5-f41a-4bd7-bb54-bdd93de97bb3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.917922 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.927889 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-config-data" (OuterVolumeSpecName: "config-data") pod "662118d5-f41a-4bd7-bb54-bdd93de97bb3" (UID: "662118d5-f41a-4bd7-bb54-bdd93de97bb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.946691 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j746h\" (UniqueName: \"kubernetes.io/projected/6571f0ea-a7f7-4ba4-bd41-a59f92642ddc-kube-api-access-j746h\") pod \"barbican-c086-account-create-2xc6m\" (UID: \"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc\") " pod="openstack/barbican-c086-account-create-2xc6m" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.947127 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.947190 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.947202 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/662118d5-f41a-4bd7-bb54-bdd93de97bb3-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.947212 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.947221 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:22 crc kubenswrapper[4833]: I1013 06:46:22.947231 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28b5320-1046-4ca8-974d-185760b4e612-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.030468 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f5ac-account-create-dhkt2"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.031494 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f5ac-account-create-dhkt2" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.033126 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.041129 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f5ac-account-create-dhkt2"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.049399 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j746h\" (UniqueName: \"kubernetes.io/projected/6571f0ea-a7f7-4ba4-bd41-a59f92642ddc-kube-api-access-j746h\") pod \"barbican-c086-account-create-2xc6m\" (UID: \"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc\") " pod="openstack/barbican-c086-account-create-2xc6m" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.077275 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j746h\" (UniqueName: \"kubernetes.io/projected/6571f0ea-a7f7-4ba4-bd41-a59f92642ddc-kube-api-access-j746h\") pod \"barbican-c086-account-create-2xc6m\" (UID: \"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc\") " pod="openstack/barbican-c086-account-create-2xc6m" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.143804 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:46:23 crc kubenswrapper[4833]: W1013 06:46:23.148388 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f571ae4_3483_4a8e_8f33_f445c77395c2.slice/crio-e72bf7c77dc7789deed88c0d63b45c7b3c2274e6425a22d2b374b439f501f871 WatchSource:0}: Error finding container e72bf7c77dc7789deed88c0d63b45c7b3c2274e6425a22d2b374b439f501f871: Status 404 returned error can't find the container with id e72bf7c77dc7789deed88c0d63b45c7b3c2274e6425a22d2b374b439f501f871 Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.150891 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27zv\" (UniqueName: \"kubernetes.io/projected/9d2b8534-5a7a-4f8c-95d6-f3ceb6475639-kube-api-access-p27zv\") pod \"neutron-f5ac-account-create-dhkt2\" (UID: \"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639\") " pod="openstack/neutron-f5ac-account-create-dhkt2" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.169722 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c086-account-create-2xc6m" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.175688 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5wjw" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.177585 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5wjw" event={"ID":"a28b5320-1046-4ca8-974d-185760b4e612","Type":"ContainerDied","Data":"1aabfb3906ab84d87d575531ed9bdfa3085dc4f806a3f8f6b35fba0dde2f4d3b"} Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.177621 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aabfb3906ab84d87d575531ed9bdfa3085dc4f806a3f8f6b35fba0dde2f4d3b" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.180088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"662118d5-f41a-4bd7-bb54-bdd93de97bb3","Type":"ContainerDied","Data":"dba18e9bb30e84b090ac850b3700c5245936bbb6495735bf0379db8a3575f9c8"} Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.180174 4833 scope.go:117] "RemoveContainer" containerID="d682073c8d5d15529859df40df9640910c3634d1fa3ac73f61b6e5d053204305" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.180205 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.181668 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ltqfn" event={"ID":"b5d6e331-404e-48b3-b9ee-66386208af92","Type":"ContainerStarted","Data":"81a0b378f31bd6d899f9c4dfba05c3dd6bcd7e5caca3a7c448508c1eba0d678e"} Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.183221 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f571ae4-3483-4a8e-8f33-f445c77395c2","Type":"ContainerStarted","Data":"e72bf7c77dc7789deed88c0d63b45c7b3c2274e6425a22d2b374b439f501f871"} Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.184691 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vljcq" event={"ID":"401c9b31-e308-4305-b56e-29fc8594856d","Type":"ContainerStarted","Data":"718295aa7ee717c333352fdf443a7bba813f3a4c32d380f3c3fa23c934f30af0"} Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.187525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88476575-d57c-4196-bd20-eee1fd482ead","Type":"ContainerStarted","Data":"d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d"} Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.209947 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-vljcq" podStartSLOduration=2.265588082 podStartE2EDuration="9.209932591s" podCreationTimestamp="2025-10-13 06:46:14 +0000 UTC" firstStartedPulling="2025-10-13 06:46:15.586640995 +0000 UTC m=+1065.687063911" lastFinishedPulling="2025-10-13 06:46:22.530985494 +0000 UTC m=+1072.631408420" observedRunningTime="2025-10-13 06:46:23.204997658 +0000 UTC m=+1073.305420574" watchObservedRunningTime="2025-10-13 06:46:23.209932591 +0000 UTC m=+1073.310355497" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.223114 4833 scope.go:117] "RemoveContainer" containerID="2491b3c9fa559cb8315a06b34661008a09343924c4386d467d82e099aec1bed1" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.228630 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.237398 4833 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.253499 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p27zv\" (UniqueName: \"kubernetes.io/projected/9d2b8534-5a7a-4f8c-95d6-f3ceb6475639-kube-api-access-p27zv\") pod \"neutron-f5ac-account-create-dhkt2\" (UID: \"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639\") " pod="openstack/neutron-f5ac-account-create-dhkt2" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.256381 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.258076 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.260204 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.261099 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.275595 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p27zv\" (UniqueName: \"kubernetes.io/projected/9d2b8534-5a7a-4f8c-95d6-f3ceb6475639-kube-api-access-p27zv\") pod \"neutron-f5ac-account-create-dhkt2\" (UID: \"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639\") " pod="openstack/neutron-f5ac-account-create-dhkt2" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.308610 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.356846 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f5ac-account-create-dhkt2" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.361435 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.361513 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxf44\" (UniqueName: \"kubernetes.io/projected/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-kube-api-access-xxf44\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.361695 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.361815 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.361947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.361996 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.362065 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.362160 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.463393 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.463451 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.463486 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.463527 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.463569 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.463591 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxf44\" (UniqueName: \"kubernetes.io/projected/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-kube-api-access-xxf44\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.463626 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.463660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.466679 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.466858 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") device mount path \"/mnt/openstack/pv11\"" 
pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.466938 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.471217 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.471298 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.487787 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.484290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.489166 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxf44\" (UniqueName: \"kubernetes.io/projected/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-kube-api-access-xxf44\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.513389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.616979 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.680302 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c086-account-create-2xc6m"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.799463 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b5wjw"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.809535 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b5wjw"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.848687 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f5ac-account-create-dhkt2"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.869799 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l8snz"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.870842 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.881745 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l8snz"] Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.885206 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.885461 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.885769 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.885961 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-62lpv" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.975773 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-combined-ca-bundle\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.975878 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndxsx\" (UniqueName: \"kubernetes.io/projected/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-kube-api-access-ndxsx\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.975905 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-config-data\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.976190 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-fernet-keys\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.976257 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-scripts\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:23 crc kubenswrapper[4833]: I1013 06:46:23.976330 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-credential-keys\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.077414 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-combined-ca-bundle\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.077713 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndxsx\" (UniqueName: \"kubernetes.io/projected/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-kube-api-access-ndxsx\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.077740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-config-data\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.077953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-fernet-keys\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.077985 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-scripts\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.078172 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-credential-keys\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.081317 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-credential-keys\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.081621 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-fernet-keys\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.081943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-scripts\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.081968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-combined-ca-bundle\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.082497 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-config-data\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.096669 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndxsx\" (UniqueName: \"kubernetes.io/projected/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-kube-api-access-ndxsx\") pod \"keystone-bootstrap-l8snz\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") " pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.193631 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.205952 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f571ae4-3483-4a8e-8f33-f445c77395c2","Type":"ContainerStarted","Data":"37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529"} Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.208043 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f5ac-account-create-dhkt2" event={"ID":"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639","Type":"ContainerStarted","Data":"6a381821c0f526eb4dd163bddb35e3cfd9331a5f4d095b7aef5070b6481ca55e"} Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.212164 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c086-account-create-2xc6m" event={"ID":"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc","Type":"ContainerStarted","Data":"deb8cba5ece9f8611d161426287e6b8c0c943a7706726b8b53ed0122357cee8a"} Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.212191 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c086-account-create-2xc6m" event={"ID":"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc","Type":"ContainerStarted","Data":"15c1819b5001f0292736b6fcd68b0261e96a77281f7f2eea1e3c5c1052b25a54"} Oct 13 06:46:24 crc kubenswrapper[4833]: W1013 06:46:24.216107 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f1a519_8ba6_44b4_9230_b93b13f25ff4.slice/crio-cdc76984f6c2dbf92690452970a5b9caa55c3c1d9722432cedd7e693dd16191f WatchSource:0}: Error finding container cdc76984f6c2dbf92690452970a5b9caa55c3c1d9722432cedd7e693dd16191f: Status 404 
returned error can't find the container with id cdc76984f6c2dbf92690452970a5b9caa55c3c1d9722432cedd7e693dd16191f Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.226095 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l8snz" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.636582 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662118d5-f41a-4bd7-bb54-bdd93de97bb3" path="/var/lib/kubelet/pods/662118d5-f41a-4bd7-bb54-bdd93de97bb3/volumes" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.637835 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28b5320-1046-4ca8-974d-185760b4e612" path="/var/lib/kubelet/pods/a28b5320-1046-4ca8-974d-185760b4e612/volumes" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.685749 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l8snz"] Oct 13 06:46:24 crc kubenswrapper[4833]: W1013 06:46:24.690236 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod604fdcdc_4fc5_4dcb_98b2_42e44f2bad23.slice/crio-941c1f20897dc3b7218ab1b865c06431c199f4a3a7cc27513cdd3f09475f97ee WatchSource:0}: Error finding container 941c1f20897dc3b7218ab1b865c06431c199f4a3a7cc27513cdd3f09475f97ee: Status 404 returned error can't find the container with id 941c1f20897dc3b7218ab1b865c06431c199f4a3a7cc27513cdd3f09475f97ee Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.921530 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.985644 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795846498c-x69sr"] Oct 13 06:46:24 crc kubenswrapper[4833]: I1013 06:46:24.985957 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795846498c-x69sr" podUID="1f0e7b7a-7228-42df-a934-a2900ac292a5" containerName="dnsmasq-dns" containerID="cri-o://1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645" gracePeriod=10 Oct 13 06:46:25 crc kubenswrapper[4833]: I1013 06:46:25.221566 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l8snz" event={"ID":"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23","Type":"ContainerStarted","Data":"941c1f20897dc3b7218ab1b865c06431c199f4a3a7cc27513cdd3f09475f97ee"} Oct 13 06:46:25 crc kubenswrapper[4833]: I1013 06:46:25.224210 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f1a519-8ba6-44b4-9230-b93b13f25ff4","Type":"ContainerStarted","Data":"cdc76984f6c2dbf92690452970a5b9caa55c3c1d9722432cedd7e693dd16191f"} Oct 13 06:46:25 crc kubenswrapper[4833]: I1013 06:46:25.235741 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f5ac-account-create-dhkt2" event={"ID":"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639","Type":"ContainerStarted","Data":"7e9b9ddb1647f7c0accf076a96f7e4e26b43c8c7d1b79a8c05c59c3623bb0777"} Oct 13 06:46:25 crc kubenswrapper[4833]: I1013 06:46:25.238085 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f571ae4-3483-4a8e-8f33-f445c77395c2","Type":"ContainerStarted","Data":"f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011"} Oct 13 06:46:25 crc kubenswrapper[4833]: I1013 06:46:25.279453 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f5ac-account-create-dhkt2" podStartSLOduration=2.279430438 podStartE2EDuration="2.279430438s" podCreationTimestamp="2025-10-13 06:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:25.252240099 +0000 UTC m=+1075.352663095" watchObservedRunningTime="2025-10-13 06:46:25.279430438 +0000 UTC m=+1075.379853354" Oct 13 06:46:25 crc kubenswrapper[4833]: I1013 06:46:25.281393 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-c086-account-create-2xc6m" podStartSLOduration=3.281373484 podStartE2EDuration="3.281373484s" podCreationTimestamp="2025-10-13 06:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:25.267382358 +0000 UTC m=+1075.367805284" watchObservedRunningTime="2025-10-13 06:46:25.281373484 +0000 UTC m=+1075.381796400" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.099640 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.218097 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-nb\") pod \"1f0e7b7a-7228-42df-a934-a2900ac292a5\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.218161 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-svc\") pod \"1f0e7b7a-7228-42df-a934-a2900ac292a5\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.218260 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-config\") pod \"1f0e7b7a-7228-42df-a934-a2900ac292a5\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.218289 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-sb\") pod \"1f0e7b7a-7228-42df-a934-a2900ac292a5\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.218320 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj96j\" (UniqueName: \"kubernetes.io/projected/1f0e7b7a-7228-42df-a934-a2900ac292a5-kube-api-access-rj96j\") pod \"1f0e7b7a-7228-42df-a934-a2900ac292a5\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.218336 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-swift-storage-0\") pod \"1f0e7b7a-7228-42df-a934-a2900ac292a5\" (UID: \"1f0e7b7a-7228-42df-a934-a2900ac292a5\") " Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.238473 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1f0e7b7a-7228-42df-a934-a2900ac292a5-kube-api-access-rj96j" (OuterVolumeSpecName: "kube-api-access-rj96j") pod "1f0e7b7a-7228-42df-a934-a2900ac292a5" (UID: "1f0e7b7a-7228-42df-a934-a2900ac292a5"). InnerVolumeSpecName "kube-api-access-rj96j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.252924 4833 generic.go:334] "Generic (PLEG): container finished" podID="6571f0ea-a7f7-4ba4-bd41-a59f92642ddc" containerID="deb8cba5ece9f8611d161426287e6b8c0c943a7706726b8b53ed0122357cee8a" exitCode=0 Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.253010 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c086-account-create-2xc6m" event={"ID":"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc","Type":"ContainerDied","Data":"deb8cba5ece9f8611d161426287e6b8c0c943a7706726b8b53ed0122357cee8a"} Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.261417 4833 generic.go:334] "Generic (PLEG): container finished" podID="1f0e7b7a-7228-42df-a934-a2900ac292a5" containerID="1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645" exitCode=0 Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.261497 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-x69sr" event={"ID":"1f0e7b7a-7228-42df-a934-a2900ac292a5","Type":"ContainerDied","Data":"1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645"} Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.261526 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-x69sr" event={"ID":"1f0e7b7a-7228-42df-a934-a2900ac292a5","Type":"ContainerDied","Data":"4453edb659857abfa3b8f3ee2b6289284e7f322fe1fe47d3245730169a0a05c4"} Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.261562 4833 scope.go:117] "RemoveContainer" containerID="1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.261709 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795846498c-x69sr" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.268856 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l8snz" event={"ID":"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23","Type":"ContainerStarted","Data":"305356998f5d389dcf783b5d0672aa457d5023b743e7a50dc3fee24c7afd5da4"} Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.270723 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f1a519-8ba6-44b4-9230-b93b13f25ff4","Type":"ContainerStarted","Data":"962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2"} Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.273284 4833 generic.go:334] "Generic (PLEG): container finished" podID="9d2b8534-5a7a-4f8c-95d6-f3ceb6475639" containerID="7e9b9ddb1647f7c0accf076a96f7e4e26b43c8c7d1b79a8c05c59c3623bb0777" exitCode=0 Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.273420 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f5ac-account-create-dhkt2" event={"ID":"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639","Type":"ContainerDied","Data":"7e9b9ddb1647f7c0accf076a96f7e4e26b43c8c7d1b79a8c05c59c3623bb0777"} Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.282077 4833 scope.go:117] "RemoveContainer" containerID="69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.294486 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l8snz" podStartSLOduration=3.294465565 podStartE2EDuration="3.294465565s" podCreationTimestamp="2025-10-13 06:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:26.288578964 +0000 UTC m=+1076.389001900" watchObservedRunningTime="2025-10-13 06:46:26.294465565 +0000 UTC m=+1076.394888481" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.322745 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj96j\" (UniqueName: \"kubernetes.io/projected/1f0e7b7a-7228-42df-a934-a2900ac292a5-kube-api-access-rj96j\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.330106 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.330082328 podStartE2EDuration="5.330082328s" podCreationTimestamp="2025-10-13 06:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:26.326501814 +0000 UTC m=+1076.426924740" watchObservedRunningTime="2025-10-13 06:46:26.330082328 +0000 UTC m=+1076.430505244" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.402033 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f0e7b7a-7228-42df-a934-a2900ac292a5" (UID: "1f0e7b7a-7228-42df-a934-a2900ac292a5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.403395 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f0e7b7a-7228-42df-a934-a2900ac292a5" (UID: "1f0e7b7a-7228-42df-a934-a2900ac292a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.418034 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1f0e7b7a-7228-42df-a934-a2900ac292a5" (UID: "1f0e7b7a-7228-42df-a934-a2900ac292a5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.424646 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.424679 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.424690 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.439780 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f0e7b7a-7228-42df-a934-a2900ac292a5" (UID: "1f0e7b7a-7228-42df-a934-a2900ac292a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.456019 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-config" (OuterVolumeSpecName: "config") pod "1f0e7b7a-7228-42df-a934-a2900ac292a5" (UID: "1f0e7b7a-7228-42df-a934-a2900ac292a5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.526365 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.527106 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f0e7b7a-7228-42df-a934-a2900ac292a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.609892 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795846498c-x69sr"] Oct 13 06:46:26 crc kubenswrapper[4833]: I1013 06:46:26.642422 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795846498c-x69sr"] Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.285092 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f1a519-8ba6-44b4-9230-b93b13f25ff4","Type":"ContainerStarted","Data":"39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f"} Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.314461 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.314438625 podStartE2EDuration="4.314438625s" podCreationTimestamp="2025-10-13 06:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:27.31046842 +0000 UTC m=+1077.410891336" watchObservedRunningTime="2025-10-13 06:46:27.314438625 +0000 UTC m=+1077.414861541" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.334989 4833 scope.go:117] "RemoveContainer" containerID="1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645" Oct 13 06:46:27 crc kubenswrapper[4833]: E1013 06:46:27.336762 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645\": container with ID starting with 1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645 not found: ID does not exist" containerID="1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.336813 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645"} err="failed to get container status \"1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645\": rpc error: code = NotFound desc = could not find container \"1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645\": container with ID starting with 1c268051faa093bf24583e046eec0edc68aaef5a964314997be5cfe10ad5c645 not found: ID does not exist" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.336838 4833 scope.go:117] "RemoveContainer" containerID="69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d" Oct 13 06:46:27 crc kubenswrapper[4833]: E1013 06:46:27.337311 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d\": container with ID starting with 
69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d not found: ID does not exist" containerID="69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.337365 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d"} err="failed to get container status \"69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d\": rpc error: code = NotFound desc = could not find container \"69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d\": container with ID starting with 69b92c0f1e1d4bd719a9e9d2e3d8701cfc65207e7727d655ad710a94d883c39d not found: ID does not exist" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.572826 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c086-account-create-2xc6m" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.748598 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j746h\" (UniqueName: \"kubernetes.io/projected/6571f0ea-a7f7-4ba4-bd41-a59f92642ddc-kube-api-access-j746h\") pod \"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc\" (UID: \"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc\") " Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.753175 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6571f0ea-a7f7-4ba4-bd41-a59f92642ddc-kube-api-access-j746h" (OuterVolumeSpecName: "kube-api-access-j746h") pod "6571f0ea-a7f7-4ba4-bd41-a59f92642ddc" (UID: "6571f0ea-a7f7-4ba4-bd41-a59f92642ddc"). InnerVolumeSpecName "kube-api-access-j746h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.789064 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f5ac-account-create-dhkt2" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.850856 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j746h\" (UniqueName: \"kubernetes.io/projected/6571f0ea-a7f7-4ba4-bd41-a59f92642ddc-kube-api-access-j746h\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.951705 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p27zv\" (UniqueName: \"kubernetes.io/projected/9d2b8534-5a7a-4f8c-95d6-f3ceb6475639-kube-api-access-p27zv\") pod \"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639\" (UID: \"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639\") " Oct 13 06:46:27 crc kubenswrapper[4833]: I1013 06:46:27.955305 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2b8534-5a7a-4f8c-95d6-f3ceb6475639-kube-api-access-p27zv" (OuterVolumeSpecName: "kube-api-access-p27zv") pod "9d2b8534-5a7a-4f8c-95d6-f3ceb6475639" (UID: "9d2b8534-5a7a-4f8c-95d6-f3ceb6475639"). InnerVolumeSpecName "kube-api-access-p27zv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.055259 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p27zv\" (UniqueName: \"kubernetes.io/projected/9d2b8534-5a7a-4f8c-95d6-f3ceb6475639-kube-api-access-p27zv\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.302659 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f5ac-account-create-dhkt2" event={"ID":"9d2b8534-5a7a-4f8c-95d6-f3ceb6475639","Type":"ContainerDied","Data":"6a381821c0f526eb4dd163bddb35e3cfd9331a5f4d095b7aef5070b6481ca55e"} Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.303001 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a381821c0f526eb4dd163bddb35e3cfd9331a5f4d095b7aef5070b6481ca55e" Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.302682 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f5ac-account-create-dhkt2" Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.306492 4833 generic.go:334] "Generic (PLEG): container finished" podID="401c9b31-e308-4305-b56e-29fc8594856d" containerID="718295aa7ee717c333352fdf443a7bba813f3a4c32d380f3c3fa23c934f30af0" exitCode=0 Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.306638 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vljcq" event={"ID":"401c9b31-e308-4305-b56e-29fc8594856d","Type":"ContainerDied","Data":"718295aa7ee717c333352fdf443a7bba813f3a4c32d380f3c3fa23c934f30af0"} Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.308365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88476575-d57c-4196-bd20-eee1fd482ead","Type":"ContainerStarted","Data":"1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d"} Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.310924 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c086-account-create-2xc6m" Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.310961 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c086-account-create-2xc6m" event={"ID":"6571f0ea-a7f7-4ba4-bd41-a59f92642ddc","Type":"ContainerDied","Data":"15c1819b5001f0292736b6fcd68b0261e96a77281f7f2eea1e3c5c1052b25a54"} Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.310984 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15c1819b5001f0292736b6fcd68b0261e96a77281f7f2eea1e3c5c1052b25a54" Oct 13 06:46:28 crc kubenswrapper[4833]: I1013 06:46:28.640551 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0e7b7a-7228-42df-a934-a2900ac292a5" path="/var/lib/kubelet/pods/1f0e7b7a-7228-42df-a934-a2900ac292a5/volumes" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.325037 4833 generic.go:334] "Generic (PLEG): container finished" podID="604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" containerID="305356998f5d389dcf783b5d0672aa457d5023b743e7a50dc3fee24c7afd5da4" exitCode=0 Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.325128 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l8snz" event={"ID":"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23","Type":"ContainerDied","Data":"305356998f5d389dcf783b5d0672aa457d5023b743e7a50dc3fee24c7afd5da4"} Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.654405 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vljcq" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.784816 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-combined-ca-bundle\") pod \"401c9b31-e308-4305-b56e-29fc8594856d\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.784943 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401c9b31-e308-4305-b56e-29fc8594856d-logs\") pod \"401c9b31-e308-4305-b56e-29fc8594856d\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.785004 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-config-data\") pod \"401c9b31-e308-4305-b56e-29fc8594856d\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.785110 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-scripts\") pod \"401c9b31-e308-4305-b56e-29fc8594856d\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.785150 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trg8k\" (UniqueName: \"kubernetes.io/projected/401c9b31-e308-4305-b56e-29fc8594856d-kube-api-access-trg8k\") pod \"401c9b31-e308-4305-b56e-29fc8594856d\" (UID: \"401c9b31-e308-4305-b56e-29fc8594856d\") " Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.785471 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/401c9b31-e308-4305-b56e-29fc8594856d-logs" (OuterVolumeSpecName: "logs") pod "401c9b31-e308-4305-b56e-29fc8594856d" (UID: "401c9b31-e308-4305-b56e-29fc8594856d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.786132 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401c9b31-e308-4305-b56e-29fc8594856d-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.790370 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401c9b31-e308-4305-b56e-29fc8594856d-kube-api-access-trg8k" (OuterVolumeSpecName: "kube-api-access-trg8k") pod "401c9b31-e308-4305-b56e-29fc8594856d" (UID: "401c9b31-e308-4305-b56e-29fc8594856d"). InnerVolumeSpecName "kube-api-access-trg8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.791787 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-scripts" (OuterVolumeSpecName: "scripts") pod "401c9b31-e308-4305-b56e-29fc8594856d" (UID: "401c9b31-e308-4305-b56e-29fc8594856d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.814005 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "401c9b31-e308-4305-b56e-29fc8594856d" (UID: "401c9b31-e308-4305-b56e-29fc8594856d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.818855 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-config-data" (OuterVolumeSpecName: "config-data") pod "401c9b31-e308-4305-b56e-29fc8594856d" (UID: "401c9b31-e308-4305-b56e-29fc8594856d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.887693 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.887775 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.887819 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/401c9b31-e308-4305-b56e-29fc8594856d-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:29 crc kubenswrapper[4833]: I1013 06:46:29.887831 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trg8k\" (UniqueName: \"kubernetes.io/projected/401c9b31-e308-4305-b56e-29fc8594856d-kube-api-access-trg8k\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.341080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vljcq" event={"ID":"401c9b31-e308-4305-b56e-29fc8594856d","Type":"ContainerDied","Data":"2bf42838d8feec31372ff081de6b666287cfa2db5fdf0b9639a35455dec2f0a8"} Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.341117 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vljcq" Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.341158 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf42838d8feec31372ff081de6b666287cfa2db5fdf0b9639a35455dec2f0a8" Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.507285 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6869bc4646-lrqdg"] Oct 13 06:46:30 crc kubenswrapper[4833]: E1013 06:46:30.508151 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6571f0ea-a7f7-4ba4-bd41-a59f92642ddc" containerName="mariadb-account-create" Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508177 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6571f0ea-a7f7-4ba4-bd41-a59f92642ddc" containerName="mariadb-account-create" Oct 13 06:46:30 crc kubenswrapper[4833]: E1013 06:46:30.508200 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401c9b31-e308-4305-b56e-29fc8594856d" containerName="placement-db-sync" Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508207 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="401c9b31-e308-4305-b56e-29fc8594856d" containerName="placement-db-sync" Oct 13 06:46:30 crc kubenswrapper[4833]: E1013 06:46:30.508215 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2b8534-5a7a-4f8c-95d6-f3ceb6475639" containerName="mariadb-account-create" Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508221 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2b8534-5a7a-4f8c-95d6-f3ceb6475639" containerName="mariadb-account-create" Oct 13 06:46:30 crc kubenswrapper[4833]: E1013 06:46:30.508238 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0e7b7a-7228-42df-a934-a2900ac292a5" containerName="init" Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508243 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1f0e7b7a-7228-42df-a934-a2900ac292a5" containerName="init"
Oct 13 06:46:30 crc kubenswrapper[4833]: E1013 06:46:30.508257 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0e7b7a-7228-42df-a934-a2900ac292a5" containerName="dnsmasq-dns"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508263 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0e7b7a-7228-42df-a934-a2900ac292a5" containerName="dnsmasq-dns"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508411 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6571f0ea-a7f7-4ba4-bd41-a59f92642ddc" containerName="mariadb-account-create"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508431 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="401c9b31-e308-4305-b56e-29fc8594856d" containerName="placement-db-sync"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508441 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0e7b7a-7228-42df-a934-a2900ac292a5" containerName="dnsmasq-dns"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.508452 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2b8534-5a7a-4f8c-95d6-f3ceb6475639" containerName="mariadb-account-create"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.509465 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.511487 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.515984 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4558w"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.516181 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.516285 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.518026 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.530255 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6869bc4646-lrqdg"]
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.603145 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-combined-ca-bundle\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.603558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-scripts\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.603654 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnsqr\" (UniqueName: \"kubernetes.io/projected/626d71e0-e957-4a46-9565-d19058a575c9-kube-api-access-mnsqr\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.603683 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-public-tls-certs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.603702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d71e0-e957-4a46-9565-d19058a575c9-logs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.603721 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-internal-tls-certs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.603945 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-config-data\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.704302 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l8snz"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.705082 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-combined-ca-bundle\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.705128 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-scripts\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.705238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnsqr\" (UniqueName: \"kubernetes.io/projected/626d71e0-e957-4a46-9565-d19058a575c9-kube-api-access-mnsqr\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.705283 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-public-tls-certs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.705307 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d71e0-e957-4a46-9565-d19058a575c9-logs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.705340 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-internal-tls-certs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.705380 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-config-data\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.707561 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d71e0-e957-4a46-9565-d19058a575c9-logs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.708638 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.708736 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
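The reflector.go:368 lines above record the kubelet's client-go informer caches syncing the Secrets a pod references before its volumes can be mounted. A minimal sketch of that mechanism, assuming nothing beyond stock client-go (this is not the kubelet's own code; the kubeconfig path and resync interval are illustrative):

// Populate a local *v1.Secret cache for the "openstack" namespace with a
// shared informer, mirroring the "Caches populated" lines in the log.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Watch only the "openstack" namespace, like the object-"openstack"/... entries.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 10*time.Minute, informers.WithNamespace("openstack"))
	secretInformer := factory.Core().V1().Secrets().Informer()

	secretInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			s := obj.(*v1.Secret)
			fmt.Printf("cache populated: %s/%s\n", s.Namespace, s.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// The log's "Caches populated" point corresponds to the sync completing here.
	cache.WaitForCacheSync(stop, secretInformer.HasSynced)
	select {}
}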
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.708849 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.710737 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.714049 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-combined-ca-bundle\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.723103 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-public-tls-certs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.724599 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-internal-tls-certs\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.724854 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-scripts\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.725586 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnsqr\" (UniqueName: \"kubernetes.io/projected/626d71e0-e957-4a46-9565-d19058a575c9-kube-api-access-mnsqr\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.730247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-config-data\") pod \"placement-6869bc4646-lrqdg\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.805992 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndxsx\" (UniqueName: \"kubernetes.io/projected/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-kube-api-access-ndxsx\") pod \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") "
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.806094 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-scripts\") pod \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") "
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.806113 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-fernet-keys\") pod \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") "
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.806154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-config-data\") pod \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") "
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.806270 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-credential-keys\") pod \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") "
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.806384 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-combined-ca-bundle\") pod \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\" (UID: \"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23\") "
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.809348 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" (UID: "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.812932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-scripts" (OuterVolumeSpecName: "scripts") pod "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" (UID: "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.812972 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" (UID: "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.813367 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-kube-api-access-ndxsx" (OuterVolumeSpecName: "kube-api-access-ndxsx") pod "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" (UID: "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23"). InnerVolumeSpecName "kube-api-access-ndxsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.829887 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" (UID: "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.833500 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-config-data" (OuterVolumeSpecName: "config-data") pod "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" (UID: "604fdcdc-4fc5-4dcb-98b2-42e44f2bad23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.838681 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4558w"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.847636 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.911705 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-credential-keys\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.911746 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.911762 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndxsx\" (UniqueName: \"kubernetes.io/projected/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-kube-api-access-ndxsx\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.911772 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.911783 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:30 crc kubenswrapper[4833]: I1013 06:46:30.911795 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.319323 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6869bc4646-lrqdg"]
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.352839 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l8snz" event={"ID":"604fdcdc-4fc5-4dcb-98b2-42e44f2bad23","Type":"ContainerDied","Data":"941c1f20897dc3b7218ab1b865c06431c199f4a3a7cc27513cdd3f09475f97ee"}
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.352883 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="941c1f20897dc3b7218ab1b865c06431c199f4a3a7cc27513cdd3f09475f97ee"
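The entries above trace the kubelet volume manager's two reconciliation directions: placement's volumes move through VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume (218), and "MountVolume.SetUp succeeded" (operation_generator.go:637), while the finished keystone-bootstrap pod's volumes go the other way, through UnmountVolume (159), "TearDown succeeded" (803), and "Volume detached" (293). A toy Go sketch of that desired-state/actual-state convergence; the types and function here are deliberately hypothetical paraphrase, not the kubelet's real API:

// Schematic only: mount what is desired but absent, unmount what is present
// but no longer desired, until the two states converge.
package main

import "fmt"

type volume struct{ name, pod string }

func reconcile(desired, actual map[string]volume) {
	for key, v := range desired {
		if _, ok := actual[key]; !ok {
			fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
			actual[key] = v // corresponds to "MountVolume.SetUp succeeded"
		}
	}
	for key, v := range actual {
		if _, ok := desired[key]; !ok {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
			delete(actual, key) // corresponds to "Volume detached"
		}
	}
}

func main() {
	desired := map[string]volume{"placement/config-data": {"config-data", "placement-6869bc4646-lrqdg"}}
	actual := map[string]volume{"keystone-bootstrap/fernet-keys": {"fernet-keys", "keystone-bootstrap-l8snz"}}
	reconcile(desired, actual)
	reconcile(desired, actual) // second pass is a no-op: states have converged
}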
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.352908 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l8snz"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.497100 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-558c47b6d4-9zp2v"]
Oct 13 06:46:31 crc kubenswrapper[4833]: E1013 06:46:31.497515 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" containerName="keystone-bootstrap"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.497553 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" containerName="keystone-bootstrap"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.497785 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" containerName="keystone-bootstrap"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.498476 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.501430 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.501654 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.501774 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.501954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.501989 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.502074 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-62lpv"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.505733 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-558c47b6d4-9zp2v"]
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.554019 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.554110 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.593510 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.607259 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.624334 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-combined-ca-bundle\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.624612 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-internal-tls-certs\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.624744 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-public-tls-certs\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.624858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkhtt\" (UniqueName: \"kubernetes.io/projected/c03db41f-e7fb-4188-bd67-13f35c231490-kube-api-access-rkhtt\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.625013 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-config-data\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.625110 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-scripts\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.625234 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-fernet-keys\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.625410 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-credential-keys\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.726912 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkhtt\" (UniqueName: \"kubernetes.io/projected/c03db41f-e7fb-4188-bd67-13f35c231490-kube-api-access-rkhtt\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.726981 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-config-data\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.727004 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-scripts\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.727042 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-fernet-keys\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.727124 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-credential-keys\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.727194 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-combined-ca-bundle\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.727239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-internal-tls-certs\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.727295 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-public-tls-certs\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.732461 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-combined-ca-bundle\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.732684 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-credential-keys\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.733114 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-fernet-keys\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.734042 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-scripts\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.738827 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-config-data\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.749588 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkhtt\" (UniqueName: \"kubernetes.io/projected/c03db41f-e7fb-4188-bd67-13f35c231490-kube-api-access-rkhtt\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.750100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-internal-tls-certs\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.750617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-public-tls-certs\") pod \"keystone-558c47b6d4-9zp2v\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:31 crc kubenswrapper[4833]: I1013 06:46:31.826855 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-558c47b6d4-9zp2v"
Oct 13 06:46:32 crc kubenswrapper[4833]: I1013 06:46:32.362125 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 13 06:46:32 crc kubenswrapper[4833]: I1013 06:46:32.362179 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.023739 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-l6lww"]
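Every one of the keystone pod's data volumes above is Secret-backed (kubernetes.io/secret, plus one projected service-account token). A minimal sketch, using k8s.io/api/core/v1 types, of the kind of volume definition that produces those mount entries; the volume name matches the log, but which Secret backs it and where it is mounted are not shown in the log, so SecretName and MountPath here are placeholders:

// Declare a Secret-backed volume and its read-only mount, as a pod spec would.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "fernet-keys", // as in the log; backing Secret name is a placeholder
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "keystone"},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "fernet-keys",
		ReadOnly:  true,
		MountPath: "/etc/keystone/fernet-keys", // placeholder path
	}
	fmt.Printf("volume %s -> mounted at %s (read-only=%v)\n", vol.Name, mount.MountPath, mount.ReadOnly)
}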
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.032574 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.034307 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vqtz4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.035168 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.058979 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l6lww"]
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.153944 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-combined-ca-bundle\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.154012 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grctk\" (UniqueName: \"kubernetes.io/projected/82be7aac-cd15-4ed6-bec2-07ff9928d194-kube-api-access-grctk\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.154144 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-db-sync-config-data\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.255844 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-combined-ca-bundle\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.255931 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grctk\" (UniqueName: \"kubernetes.io/projected/82be7aac-cd15-4ed6-bec2-07ff9928d194-kube-api-access-grctk\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.256029 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-db-sync-config-data\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.262842 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-db-sync-config-data\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.281057 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6zmp4"]
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.283678 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.286923 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.287124 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-combined-ca-bundle\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.287266 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qc2b8"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.287278 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.292982 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6zmp4"]
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.296330 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grctk\" (UniqueName: \"kubernetes.io/projected/82be7aac-cd15-4ed6-bec2-07ff9928d194-kube-api-access-grctk\") pod \"barbican-db-sync-l6lww\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.357265 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-combined-ca-bundle\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.357586 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktklj\" (UniqueName: \"kubernetes.io/projected/8d211019-3f1c-40de-82fc-7ed19c831c7c-kube-api-access-ktklj\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.357716 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-config\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.370510 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6lww"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.373910 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869bc4646-lrqdg" event={"ID":"626d71e0-e957-4a46-9565-d19058a575c9","Type":"ContainerStarted","Data":"3c00a96fa2cd6e1491463d8d7820e7911b1a7848da1885898a55495e12e3893f"}
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.459754 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-config\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.459811 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-combined-ca-bundle\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.459932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktklj\" (UniqueName: \"kubernetes.io/projected/8d211019-3f1c-40de-82fc-7ed19c831c7c-kube-api-access-ktklj\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.464613 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-combined-ca-bundle\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.474358 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-config\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.481513 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktklj\" (UniqueName: \"kubernetes.io/projected/8d211019-3f1c-40de-82fc-7ed19c831c7c-kube-api-access-ktklj\") pod \"neutron-db-sync-6zmp4\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.620728 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.621765 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.660668 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.668726 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
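The "SyncLoop (probe)" transitions above show the usual probe lifecycle: the startup probe reports "unhealthy" until its first success flips it to "started", and only then does the readiness probe begin running (its status is "" until the first result, then "ready"). A sketch of startup and readiness probes of that shape, using k8s.io/api/core/v1 types; the endpoint path, port, and timings are illustrative assumptions, not glance's actual configuration:

// Gate readiness checks behind a startup probe, as the glance pods' log
// transitions (unhealthy -> started, then "" -> ready) suggest.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name: "glance-api", // illustrative
		// Startup probe: logs "unhealthy" until this first succeeds ("started").
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthcheck", Port: intstr.FromInt(9292)},
			},
			FailureThreshold: 30,
			PeriodSeconds:    2,
		},
		// Readiness probe: status is "" until probed, then "ready" or "unhealthy".
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthcheck", Port: intstr.FromInt(9292)},
			},
			PeriodSeconds: 5,
		},
	}
	fmt.Println("startup gate, then readiness:", container.Name)
}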
Oct 13 06:46:33 crc kubenswrapper[4833]: I1013 06:46:33.683018 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6zmp4"
Oct 13 06:46:34 crc kubenswrapper[4833]: I1013 06:46:34.328189 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 13 06:46:34 crc kubenswrapper[4833]: I1013 06:46:34.332707 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 13 06:46:34 crc kubenswrapper[4833]: I1013 06:46:34.382506 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 13 06:46:34 crc kubenswrapper[4833]: I1013 06:46:34.382830 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 13 06:46:36 crc kubenswrapper[4833]: I1013 06:46:36.498882 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 13 06:46:36 crc kubenswrapper[4833]: I1013 06:46:36.499286 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 13 06:46:36 crc kubenswrapper[4833]: I1013 06:46:36.571911 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 13 06:46:42 crc kubenswrapper[4833]: E1013 06:46:42.296179 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1"
Oct 13 06:46:42 crc kubenswrapper[4833]: E1013 06:46:42.297021 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clj2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(88476575-d57c-4196-bd20-eee1fd482ead): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 13 06:46:42 crc kubenswrapper[4833]: I1013 06:46:42.678668 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-558c47b6d4-9zp2v"]
Oct 13 06:46:43 crc kubenswrapper[4833]: E1013 06:46:43.374614 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f"
Oct 13 06:46:43 crc kubenswrapper[4833]: E1013 06:46:43.374783 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfq9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-ltqfn_openstack(b5d6e331-404e-48b3-b9ee-66386208af92): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 13 06:46:43 crc kubenswrapper[4833]: E1013 06:46:43.376104 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-ltqfn" podUID="b5d6e331-404e-48b3-b9ee-66386208af92"
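The pull failures above follow the standard progression: the CRI pull is aborted ("context canceled"), the container start fails with ErrImagePull, and subsequent sync attempts report ImagePullBackOff while the kubelet waits out a growing delay between retries (the cinder-db-sync container does eventually start, later in this log). A toy Go sketch of that retry shape; the initial delay, cap, and doubling here are illustrative assumptions, not the kubelet's actual internal parameters:

// Retry a failing pull with exponentially growing delays up to a cap,
// which is why the reported error alternates between ErrImagePull (an
// attempt just failed) and ImagePullBackOff (waiting for the next attempt).
package main

import (
	"errors"
	"fmt"
	"time"
)

func pullImage(attempt int) error {
	if attempt < 3 { // pretend the first attempts are canceled, as in the log
		return errors.New("rpc error: code = Canceled desc = copying config: context canceled")
	}
	return nil
}

func main() {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for attempt := 0; ; attempt++ {
		if err := pullImage(attempt); err != nil {
			fmt.Printf("attempt %d: ErrImagePull: %v; backing off %v (ImagePullBackOff)\n", attempt, err, delay)
			time.Sleep(delay) // shrink these delays when experimenting locally
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Println("image pulled; container can start")
		return
	}
}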
event={"ID":"c03db41f-e7fb-4188-bd67-13f35c231490","Type":"ContainerStarted","Data":"2680ccaa3743c79ad8df0ae3fe46dc4d517812491a0140cb5e4ac3352d30fdf7"} Oct 13 06:46:43 crc kubenswrapper[4833]: E1013 06:46:43.478955 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f\\\"\"" pod="openstack/cinder-db-sync-ltqfn" podUID="b5d6e331-404e-48b3-b9ee-66386208af92" Oct 13 06:46:43 crc kubenswrapper[4833]: I1013 06:46:43.843759 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6zmp4"] Oct 13 06:46:43 crc kubenswrapper[4833]: W1013 06:46:43.893384 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82be7aac_cd15_4ed6_bec2_07ff9928d194.slice/crio-1eb13bae73f71d641f8eaf66450528fe636609eaf85992c518dc3ed4c9b7232f WatchSource:0}: Error finding container 1eb13bae73f71d641f8eaf66450528fe636609eaf85992c518dc3ed4c9b7232f: Status 404 returned error can't find the container with id 1eb13bae73f71d641f8eaf66450528fe636609eaf85992c518dc3ed4c9b7232f Oct 13 06:46:43 crc kubenswrapper[4833]: I1013 06:46:43.894401 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l6lww"] Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.494460 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-558c47b6d4-9zp2v" event={"ID":"c03db41f-e7fb-4188-bd67-13f35c231490","Type":"ContainerStarted","Data":"9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e"} Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.494738 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-558c47b6d4-9zp2v" Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.500603 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6zmp4" event={"ID":"8d211019-3f1c-40de-82fc-7ed19c831c7c","Type":"ContainerStarted","Data":"ab4603996e10efb1837fd43c7ec7732d2eee4186c2e23d81c201b5e44bc74800"} Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.500643 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6zmp4" event={"ID":"8d211019-3f1c-40de-82fc-7ed19c831c7c","Type":"ContainerStarted","Data":"1773b76ece2e0388455979a68ab02e8ab15e68c3ea9b27a00de5872c349fa3a1"} Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.502701 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6lww" event={"ID":"82be7aac-cd15-4ed6-bec2-07ff9928d194","Type":"ContainerStarted","Data":"1eb13bae73f71d641f8eaf66450528fe636609eaf85992c518dc3ed4c9b7232f"} Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.521808 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-558c47b6d4-9zp2v" podStartSLOduration=13.52178706 podStartE2EDuration="13.52178706s" podCreationTimestamp="2025-10-13 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:44.517328951 +0000 UTC m=+1094.617751867" watchObservedRunningTime="2025-10-13 06:46:44.52178706 +0000 UTC m=+1094.622209976" Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.527596 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.527596 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869bc4646-lrqdg" event={"ID":"626d71e0-e957-4a46-9565-d19058a575c9","Type":"ContainerStarted","Data":"65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a"}
Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.527637 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869bc4646-lrqdg" event={"ID":"626d71e0-e957-4a46-9565-d19058a575c9","Type":"ContainerStarted","Data":"b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f"}
Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.528284 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.528313 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6869bc4646-lrqdg"
Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.536713 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6zmp4" podStartSLOduration=11.536700353 podStartE2EDuration="11.536700353s" podCreationTimestamp="2025-10-13 06:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:44.529305609 +0000 UTC m=+1094.629728525" watchObservedRunningTime="2025-10-13 06:46:44.536700353 +0000 UTC m=+1094.637123269"
Oct 13 06:46:44 crc kubenswrapper[4833]: I1013 06:46:44.555192 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6869bc4646-lrqdg" podStartSLOduration=14.555175999 podStartE2EDuration="14.555175999s" podCreationTimestamp="2025-10-13 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:46:44.552413919 +0000 UTC m=+1094.652836835" watchObservedRunningTime="2025-10-13 06:46:44.555175999 +0000 UTC m=+1094.655598915"
Oct 13 06:46:52 crc kubenswrapper[4833]: E1013 06:46:52.397724 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="88476575-d57c-4196-bd20-eee1fd482ead"
Oct 13 06:46:52 crc kubenswrapper[4833]: I1013 06:46:52.610563 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88476575-d57c-4196-bd20-eee1fd482ead","Type":"ContainerStarted","Data":"87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361"}
Oct 13 06:46:52 crc kubenswrapper[4833]: I1013 06:46:52.610789 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 13 06:46:52 crc kubenswrapper[4833]: I1013 06:46:52.610791 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="ceilometer-central-agent" containerID="cri-o://d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d" gracePeriod=30
Oct 13 06:46:52 crc kubenswrapper[4833]: I1013 06:46:52.610866 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="ceilometer-notification-agent" containerID="cri-o://1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d" gracePeriod=30
Oct 13 06:46:52 crc kubenswrapper[4833]: I1013 06:46:52.610840 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="proxy-httpd" containerID="cri-o://87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361" gracePeriod=30
Oct 13 06:46:52 crc kubenswrapper[4833]: I1013 06:46:52.620615 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6lww" event={"ID":"82be7aac-cd15-4ed6-bec2-07ff9928d194","Type":"ContainerStarted","Data":"9326acc1fd0f02265b36428d0471646f19f4f0d184be080ae968bf52c850e15b"}
Oct 13 06:46:52 crc kubenswrapper[4833]: I1013 06:46:52.659083 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-l6lww" podStartSLOduration=11.567593259 podStartE2EDuration="19.659062999s" podCreationTimestamp="2025-10-13 06:46:33 +0000 UTC" firstStartedPulling="2025-10-13 06:46:43.895940204 +0000 UTC m=+1093.996363120" lastFinishedPulling="2025-10-13 06:46:51.987409944 +0000 UTC m=+1102.087832860" observedRunningTime="2025-10-13 06:46:52.650761898 +0000 UTC m=+1102.751184824" watchObservedRunningTime="2025-10-13 06:46:52.659062999 +0000 UTC m=+1102.759485925"
Oct 13 06:46:53 crc kubenswrapper[4833]: I1013 06:46:53.630942 4833 generic.go:334] "Generic (PLEG): container finished" podID="88476575-d57c-4196-bd20-eee1fd482ead" containerID="87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361" exitCode=0
Oct 13 06:46:53 crc kubenswrapper[4833]: I1013 06:46:53.632178 4833 generic.go:334] "Generic (PLEG): container finished" podID="88476575-d57c-4196-bd20-eee1fd482ead" containerID="d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d" exitCode=0
Oct 13 06:46:53 crc kubenswrapper[4833]: I1013 06:46:53.631093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88476575-d57c-4196-bd20-eee1fd482ead","Type":"ContainerDied","Data":"87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361"}
Oct 13 06:46:53 crc kubenswrapper[4833]: I1013 06:46:53.632344 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88476575-d57c-4196-bd20-eee1fd482ead","Type":"ContainerDied","Data":"d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d"}
Oct 13 06:46:55 crc kubenswrapper[4833]: I1013 06:46:55.652472 4833 generic.go:334] "Generic (PLEG): container finished" podID="82be7aac-cd15-4ed6-bec2-07ff9928d194" containerID="9326acc1fd0f02265b36428d0471646f19f4f0d184be080ae968bf52c850e15b" exitCode=0
Oct 13 06:46:55 crc kubenswrapper[4833]: I1013 06:46:55.652525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6lww" event={"ID":"82be7aac-cd15-4ed6-bec2-07ff9928d194","Type":"ContainerDied","Data":"9326acc1fd0f02265b36428d0471646f19f4f0d184be080ae968bf52c850e15b"}
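The ceilometer containers above are killed with gracePeriod=30 and report exitCode=0 within about a second: the runtime sends SIGTERM first, and only escalates to SIGKILL if the process is still alive when the grace period expires, so a process that handles SIGTERM can exit cleanly well inside the window. A minimal Go handler of that pattern (a sketch of the general mechanism, not ceilometer's code):

// Exit cleanly on SIGTERM, well within the kubelet's grace period.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	fmt.Println("serving; send SIGTERM to stop")
	<-sigs
	// Flush buffers, close connections, etc., then exit 0, which the kubelet
	// later reports as "container finished ... exitCode=0".
	fmt.Println("SIGTERM received; shutting down cleanly")
	os.Exit(0)
}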
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.360030 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.489360 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-config-data\") pod \"88476575-d57c-4196-bd20-eee1fd482ead\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") "
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.489403 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-combined-ca-bundle\") pod \"88476575-d57c-4196-bd20-eee1fd482ead\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") "
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.489521 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-log-httpd\") pod \"88476575-d57c-4196-bd20-eee1fd482ead\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") "
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.489563 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clj2v\" (UniqueName: \"kubernetes.io/projected/88476575-d57c-4196-bd20-eee1fd482ead-kube-api-access-clj2v\") pod \"88476575-d57c-4196-bd20-eee1fd482ead\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") "
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.489598 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-scripts\") pod \"88476575-d57c-4196-bd20-eee1fd482ead\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") "
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.489641 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-run-httpd\") pod \"88476575-d57c-4196-bd20-eee1fd482ead\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") "
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.489683 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-sg-core-conf-yaml\") pod \"88476575-d57c-4196-bd20-eee1fd482ead\" (UID: \"88476575-d57c-4196-bd20-eee1fd482ead\") "
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.490618 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88476575-d57c-4196-bd20-eee1fd482ead" (UID: "88476575-d57c-4196-bd20-eee1fd482ead"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.490833 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88476575-d57c-4196-bd20-eee1fd482ead" (UID: "88476575-d57c-4196-bd20-eee1fd482ead"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.495163 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88476575-d57c-4196-bd20-eee1fd482ead-kube-api-access-clj2v" (OuterVolumeSpecName: "kube-api-access-clj2v") pod "88476575-d57c-4196-bd20-eee1fd482ead" (UID: "88476575-d57c-4196-bd20-eee1fd482ead"). InnerVolumeSpecName "kube-api-access-clj2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.495616 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88476575-d57c-4196-bd20-eee1fd482ead" (UID: "88476575-d57c-4196-bd20-eee1fd482ead"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.496223 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-scripts" (OuterVolumeSpecName: "scripts") pod "88476575-d57c-4196-bd20-eee1fd482ead" (UID: "88476575-d57c-4196-bd20-eee1fd482ead"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.570765 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88476575-d57c-4196-bd20-eee1fd482ead" (UID: "88476575-d57c-4196-bd20-eee1fd482ead"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.574873 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-config-data" (OuterVolumeSpecName: "config-data") pod "88476575-d57c-4196-bd20-eee1fd482ead" (UID: "88476575-d57c-4196-bd20-eee1fd482ead"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.591112 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.591152 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.591164 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.591176 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.591187 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clj2v\" (UniqueName: \"kubernetes.io/projected/88476575-d57c-4196-bd20-eee1fd482ead-kube-api-access-clj2v\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.591202 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88476575-d57c-4196-bd20-eee1fd482ead-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.591213 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88476575-d57c-4196-bd20-eee1fd482ead-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.671015 4833 generic.go:334] "Generic (PLEG): container finished" podID="88476575-d57c-4196-bd20-eee1fd482ead" containerID="1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d" exitCode=0
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.671065 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.671085 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88476575-d57c-4196-bd20-eee1fd482ead","Type":"ContainerDied","Data":"1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d"}
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.672161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88476575-d57c-4196-bd20-eee1fd482ead","Type":"ContainerDied","Data":"f9a47226abeb39ecf5f631c40e286cbb695fb4fd5de13b8a075f4f51aee27efd"}
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.672192 4833 scope.go:117] "RemoveContainer" containerID="87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361"
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.678542 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ltqfn" event={"ID":"b5d6e331-404e-48b3-b9ee-66386208af92","Type":"ContainerStarted","Data":"cef1392c2e441a2957315fea42ef8d028a266ebdbd1b0a9b26eb9e298a5bb917"}
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.715942 4833 scope.go:117] "RemoveContainer" containerID="1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d"
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.737287 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.744496 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.752908 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-ltqfn" podStartSLOduration=6.453333043 podStartE2EDuration="38.752890364s" podCreationTimestamp="2025-10-13 06:46:18 +0000 UTC" firstStartedPulling="2025-10-13 06:46:22.906455117 +0000 UTC m=+1073.006878033" lastFinishedPulling="2025-10-13 06:46:55.206012448 +0000 UTC m=+1105.306435354" observedRunningTime="2025-10-13 06:46:56.73726247 +0000 UTC m=+1106.837685386" watchObservedRunningTime="2025-10-13 06:46:56.752890364 +0000 UTC m=+1106.853313280"
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.759969 4833 scope.go:117] "RemoveContainer" containerID="d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d"
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.765954 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:46:56 crc kubenswrapper[4833]: E1013 06:46:56.766502 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="proxy-httpd"
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.766529 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="proxy-httpd"
Oct 13 06:46:56 crc kubenswrapper[4833]: E1013 06:46:56.766571 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="ceilometer-central-agent"
Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.766580 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="ceilometer-central-agent"
Oct 13 06:46:56 crc kubenswrapper[4833]: E1013 06:46:56.766615 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88476575-d57c-4196-bd20-eee1fd482ead"
containerName="ceilometer-notification-agent" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.766625 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="ceilometer-notification-agent" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.766838 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="proxy-httpd" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.766877 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="ceilometer-central-agent" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.766889 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="88476575-d57c-4196-bd20-eee1fd482ead" containerName="ceilometer-notification-agent" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.768734 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.772866 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.775910 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.777957 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.802579 4833 scope.go:117] "RemoveContainer" containerID="87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361" Oct 13 06:46:56 crc kubenswrapper[4833]: E1013 06:46:56.803065 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361\": container with ID starting with 87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361 not found: ID does not exist" containerID="87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.803093 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361"} err="failed to get container status \"87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361\": rpc error: code = NotFound desc = could not find container \"87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361\": container with ID starting with 87bd62b135f6ebd994227b0c2e9a22c394219ff5c6a9cd91d84c596fa5be6361 not found: ID does not exist" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.803114 4833 scope.go:117] "RemoveContainer" containerID="1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d" Oct 13 06:46:56 crc kubenswrapper[4833]: E1013 06:46:56.804151 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d\": container with ID starting with 1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d not found: ID does not exist" containerID="1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.804188 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d"} err="failed to get container status \"1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d\": rpc error: code = NotFound desc = could not find container \"1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d\": container with ID starting with 1320b3849b11e6e71c70fb037d3f1a888780684fe0adba1d5f0575053e5b790d not found: ID does not exist" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.804207 4833 scope.go:117] "RemoveContainer" containerID="d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d" Oct 13 06:46:56 crc kubenswrapper[4833]: E1013 06:46:56.804430 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d\": container with ID starting with d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d not found: ID does not exist" containerID="d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.804447 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d"} err="failed to get container status \"d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d\": rpc error: code = NotFound desc = could not find container \"d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d\": container with ID starting with d8c98d0953ee55656c844a28bd55c66ada0a1229e0786eaf1a44d52d3d7eb93d not found: ID does not exist" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.898169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.898609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-log-httpd\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.898634 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.898687 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-scripts\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.898772 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk6cn\" (UniqueName: \"kubernetes.io/projected/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-kube-api-access-bk6cn\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " 
pod="openstack/ceilometer-0" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.898865 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-run-httpd\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.898910 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-config-data\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:56 crc kubenswrapper[4833]: I1013 06:46:56.973088 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6lww" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.000979 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-combined-ca-bundle\") pod \"82be7aac-cd15-4ed6-bec2-07ff9928d194\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.001036 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-db-sync-config-data\") pod \"82be7aac-cd15-4ed6-bec2-07ff9928d194\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.001198 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.001247 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-log-httpd\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.001273 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.001310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-scripts\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.001380 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk6cn\" (UniqueName: \"kubernetes.io/projected/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-kube-api-access-bk6cn\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.001438 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-run-httpd\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.001490 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-config-data\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.002742 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-log-httpd\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.004658 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-run-httpd\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.008197 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.008801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "82be7aac-cd15-4ed6-bec2-07ff9928d194" (UID: "82be7aac-cd15-4ed6-bec2-07ff9928d194"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.009944 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-scripts\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.013565 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.018469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-config-data\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.027825 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk6cn\" (UniqueName: \"kubernetes.io/projected/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-kube-api-access-bk6cn\") pod \"ceilometer-0\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.030347 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82be7aac-cd15-4ed6-bec2-07ff9928d194" (UID: "82be7aac-cd15-4ed6-bec2-07ff9928d194"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.092474 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.103100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grctk\" (UniqueName: \"kubernetes.io/projected/82be7aac-cd15-4ed6-bec2-07ff9928d194-kube-api-access-grctk\") pod \"82be7aac-cd15-4ed6-bec2-07ff9928d194\" (UID: \"82be7aac-cd15-4ed6-bec2-07ff9928d194\") " Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.103800 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.103822 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82be7aac-cd15-4ed6-bec2-07ff9928d194-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.112083 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82be7aac-cd15-4ed6-bec2-07ff9928d194-kube-api-access-grctk" (OuterVolumeSpecName: "kube-api-access-grctk") pod "82be7aac-cd15-4ed6-bec2-07ff9928d194" (UID: "82be7aac-cd15-4ed6-bec2-07ff9928d194"). InnerVolumeSpecName "kube-api-access-grctk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.224606 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grctk\" (UniqueName: \"kubernetes.io/projected/82be7aac-cd15-4ed6-bec2-07ff9928d194-kube-api-access-grctk\") on node \"crc\" DevicePath \"\"" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.564137 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:46:57 crc kubenswrapper[4833]: W1013 06:46:57.569720 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f8bb26b_2a01_4e9a_9216_3a016a8a9a2d.slice/crio-b8a1626c4455bbf77649d6ebd208e3c8f0f76fe9482176e8f41a72b7e3f53a35 WatchSource:0}: Error finding container b8a1626c4455bbf77649d6ebd208e3c8f0f76fe9482176e8f41a72b7e3f53a35: Status 404 returned error can't find the container with id b8a1626c4455bbf77649d6ebd208e3c8f0f76fe9482176e8f41a72b7e3f53a35 Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.691635 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6lww" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.691659 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6lww" event={"ID":"82be7aac-cd15-4ed6-bec2-07ff9928d194","Type":"ContainerDied","Data":"1eb13bae73f71d641f8eaf66450528fe636609eaf85992c518dc3ed4c9b7232f"} Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.691704 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb13bae73f71d641f8eaf66450528fe636609eaf85992c518dc3ed4c9b7232f" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.694725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerStarted","Data":"b8a1626c4455bbf77649d6ebd208e3c8f0f76fe9482176e8f41a72b7e3f53a35"} Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.888245 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-bf7fd98f9-j4rff"] Oct 13 06:46:57 crc kubenswrapper[4833]: E1013 06:46:57.904663 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82be7aac-cd15-4ed6-bec2-07ff9928d194" containerName="barbican-db-sync" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.904702 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="82be7aac-cd15-4ed6-bec2-07ff9928d194" containerName="barbican-db-sync" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.905457 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="82be7aac-cd15-4ed6-bec2-07ff9928d194" containerName="barbican-db-sync" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.920503 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.926002 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.926303 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vqtz4" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.926420 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.926502 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f847dcbd8-p95b9"] Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.928020 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.929087 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.957412 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-bf7fd98f9-j4rff"] Oct 13 06:46:57 crc kubenswrapper[4833]: I1013 06:46:57.989743 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f847dcbd8-p95b9"] Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.000047 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9cb888f-7c29b"] Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.001581 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.017498 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9cb888f-7c29b"] Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043227 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-combined-ca-bundle\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043320 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data-custom\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043385 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data-custom\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043419 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-combined-ca-bundle\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043447 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f85d40e-16b8-4ece-a268-8b4d227ac36c-logs\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dncg\" (UniqueName: \"kubernetes.io/projected/8a18c26d-a476-4e4b-9320-84369da38cf2-kube-api-access-7dncg\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043560 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a18c26d-a476-4e4b-9320-84369da38cf2-logs\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043590 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.043616 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b7nh\" (UniqueName: \"kubernetes.io/projected/6f85d40e-16b8-4ece-a268-8b4d227ac36c-kube-api-access-7b7nh\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.100330 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-699bdfffd4-dzv2d"] Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.102152 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.105463 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.125275 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-699bdfffd4-dzv2d"] Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145301 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x66cb\" (UniqueName: \"kubernetes.io/projected/f4d093b1-7af9-4157-902f-78359796b208-kube-api-access-x66cb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data-custom\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145356 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145382 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data-custom\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145396 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-svc\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145419 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-combined-ca-bundle\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145434 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f85d40e-16b8-4ece-a268-8b4d227ac36c-logs\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: 
\"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-swift-storage-0\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145478 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dncg\" (UniqueName: \"kubernetes.io/projected/8a18c26d-a476-4e4b-9320-84369da38cf2-kube-api-access-7dncg\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145514 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a18c26d-a476-4e4b-9320-84369da38cf2-logs\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145579 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145601 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b7nh\" (UniqueName: \"kubernetes.io/projected/6f85d40e-16b8-4ece-a268-8b4d227ac36c-kube-api-access-7b7nh\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145621 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145646 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-config\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.145670 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-combined-ca-bundle\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.146463 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6f85d40e-16b8-4ece-a268-8b4d227ac36c-logs\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.149642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a18c26d-a476-4e4b-9320-84369da38cf2-logs\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.149858 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data-custom\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.159036 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-combined-ca-bundle\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.161322 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.161359 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.163034 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-combined-ca-bundle\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.165643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b7nh\" (UniqueName: \"kubernetes.io/projected/6f85d40e-16b8-4ece-a268-8b4d227ac36c-kube-api-access-7b7nh\") pod \"barbican-worker-bf7fd98f9-j4rff\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.167975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data-custom\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.170436 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7dncg\" (UniqueName: \"kubernetes.io/projected/8a18c26d-a476-4e4b-9320-84369da38cf2-kube-api-access-7dncg\") pod \"barbican-keystone-listener-5f847dcbd8-p95b9\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247261 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data-custom\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247310 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e87d62-aa7e-466c-a479-8b0c6e3deb64-logs\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-combined-ca-bundle\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247436 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-config\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247517 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247570 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247598 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x66cb\" (UniqueName: \"kubernetes.io/projected/f4d093b1-7af9-4157-902f-78359796b208-kube-api-access-x66cb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247652 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-svc\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247697 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmk4l\" (UniqueName: \"kubernetes.io/projected/82e87d62-aa7e-466c-a479-8b0c6e3deb64-kube-api-access-pmk4l\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.247730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-swift-storage-0\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.248392 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.248472 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-swift-storage-0\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.248929 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.249402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-config\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.250172 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-svc\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.255249 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.269298 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.270007 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x66cb\" (UniqueName: \"kubernetes.io/projected/f4d093b1-7af9-4157-902f-78359796b208-kube-api-access-x66cb\") pod \"dnsmasq-dns-54f9cb888f-7c29b\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.322459 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.349516 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.349673 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmk4l\" (UniqueName: \"kubernetes.io/projected/82e87d62-aa7e-466c-a479-8b0c6e3deb64-kube-api-access-pmk4l\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.349715 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data-custom\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.349741 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e87d62-aa7e-466c-a479-8b0c6e3deb64-logs\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.349765 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-combined-ca-bundle\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.353493 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e87d62-aa7e-466c-a479-8b0c6e3deb64-logs\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.354041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.355516 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-combined-ca-bundle\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.356388 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data-custom\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.376936 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmk4l\" (UniqueName: \"kubernetes.io/projected/82e87d62-aa7e-466c-a479-8b0c6e3deb64-kube-api-access-pmk4l\") pod \"barbican-api-699bdfffd4-dzv2d\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.542126 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.656081 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88476575-d57c-4196-bd20-eee1fd482ead" path="/var/lib/kubelet/pods/88476575-d57c-4196-bd20-eee1fd482ead/volumes" Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.657370 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f847dcbd8-p95b9"] Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.709794 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerStarted","Data":"c48cf04e811c4344f099ce9cf34c4e3083c0fdad051024faf84fdd53e28a1725"} Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.710953 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" event={"ID":"8a18c26d-a476-4e4b-9320-84369da38cf2","Type":"ContainerStarted","Data":"3b968c0d9589f8dc9186d50c9a1e3b9eb80e98b8ec3f6cd28a674143e69a7e2e"} Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 06:46:58.739663 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-bf7fd98f9-j4rff"] Oct 13 06:46:58 crc kubenswrapper[4833]: W1013 06:46:58.744179 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f85d40e_16b8_4ece_a268_8b4d227ac36c.slice/crio-a961829225c701bb0ce386f6eebbd368a8e79e25764f59f09a1f23636ae17f42 WatchSource:0}: Error finding container a961829225c701bb0ce386f6eebbd368a8e79e25764f59f09a1f23636ae17f42: Status 404 returned error can't find the container with id a961829225c701bb0ce386f6eebbd368a8e79e25764f59f09a1f23636ae17f42 Oct 13 06:46:58 crc kubenswrapper[4833]: W1013 06:46:58.903692 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4d093b1_7af9_4157_902f_78359796b208.slice/crio-ff4213ac298c2cae4e04d07af912b4c3c69a295b2a758a7e80b7072706ef4916 WatchSource:0}: Error finding container ff4213ac298c2cae4e04d07af912b4c3c69a295b2a758a7e80b7072706ef4916: Status 404 returned error can't find the container with id ff4213ac298c2cae4e04d07af912b4c3c69a295b2a758a7e80b7072706ef4916 Oct 13 06:46:58 crc kubenswrapper[4833]: I1013 
06:46:58.906337 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9cb888f-7c29b"] Oct 13 06:46:59 crc kubenswrapper[4833]: W1013 06:46:59.021810 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82e87d62_aa7e_466c_a479_8b0c6e3deb64.slice/crio-49284366641c19517c22851685ee9ce9dec2d675df092fa9316b8cf2e83d2c54 WatchSource:0}: Error finding container 49284366641c19517c22851685ee9ce9dec2d675df092fa9316b8cf2e83d2c54: Status 404 returned error can't find the container with id 49284366641c19517c22851685ee9ce9dec2d675df092fa9316b8cf2e83d2c54 Oct 13 06:46:59 crc kubenswrapper[4833]: I1013 06:46:59.023525 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-699bdfffd4-dzv2d"] Oct 13 06:46:59 crc kubenswrapper[4833]: I1013 06:46:59.725859 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bf7fd98f9-j4rff" event={"ID":"6f85d40e-16b8-4ece-a268-8b4d227ac36c","Type":"ContainerStarted","Data":"a961829225c701bb0ce386f6eebbd368a8e79e25764f59f09a1f23636ae17f42"} Oct 13 06:46:59 crc kubenswrapper[4833]: I1013 06:46:59.728352 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerStarted","Data":"4f1a44196150c66ed38bc160ef64b3a59417e5f14fa3a722c97f2886256ddebe"} Oct 13 06:46:59 crc kubenswrapper[4833]: I1013 06:46:59.729571 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4d093b1-7af9-4157-902f-78359796b208" containerID="a312f45e423f2417aebdb51edcfea9161eb4d3497e46e9da0e5847e8340d2ff4" exitCode=0 Oct 13 06:46:59 crc kubenswrapper[4833]: I1013 06:46:59.729611 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" event={"ID":"f4d093b1-7af9-4157-902f-78359796b208","Type":"ContainerDied","Data":"a312f45e423f2417aebdb51edcfea9161eb4d3497e46e9da0e5847e8340d2ff4"} Oct 13 06:46:59 crc kubenswrapper[4833]: I1013 06:46:59.729627 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" event={"ID":"f4d093b1-7af9-4157-902f-78359796b208","Type":"ContainerStarted","Data":"ff4213ac298c2cae4e04d07af912b4c3c69a295b2a758a7e80b7072706ef4916"} Oct 13 06:46:59 crc kubenswrapper[4833]: I1013 06:46:59.735598 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699bdfffd4-dzv2d" event={"ID":"82e87d62-aa7e-466c-a479-8b0c6e3deb64","Type":"ContainerStarted","Data":"ea826397e4620c9aa6802a28450a26408a048afd091c87c85d9c0e538a386d5f"} Oct 13 06:46:59 crc kubenswrapper[4833]: I1013 06:46:59.735650 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699bdfffd4-dzv2d" event={"ID":"82e87d62-aa7e-466c-a479-8b0c6e3deb64","Type":"ContainerStarted","Data":"49284366641c19517c22851685ee9ce9dec2d675df092fa9316b8cf2e83d2c54"} Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.542150 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.542505 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.747153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699bdfffd4-dzv2d" event={"ID":"82e87d62-aa7e-466c-a479-8b0c6e3deb64","Type":"ContainerStarted","Data":"bf36514ddef85dd3ee99972e11cc3f1f90b0bfd44f94b8cee098277e01bd4b36"} Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.747304 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.747339 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.749439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" event={"ID":"f4d093b1-7af9-4157-902f-78359796b208","Type":"ContainerStarted","Data":"dbd6ac2a64e81cc6af84b83d350eb067bb96ea5873ddac6c0941d36320782c89"} Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.749769 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.771764 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-699bdfffd4-dzv2d" podStartSLOduration=2.771743343 podStartE2EDuration="2.771743343s" podCreationTimestamp="2025-10-13 06:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:00.762849375 +0000 UTC m=+1110.863272291" watchObservedRunningTime="2025-10-13 06:47:00.771743343 +0000 UTC m=+1110.872166259" Oct 13 06:47:00 crc kubenswrapper[4833]: I1013 06:47:00.798513 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" podStartSLOduration=3.798486339 podStartE2EDuration="3.798486339s" podCreationTimestamp="2025-10-13 06:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:00.78680086 +0000 UTC m=+1110.887223776" watchObservedRunningTime="2025-10-13 06:47:00.798486339 +0000 UTC m=+1110.898909255" Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.758985 4833 generic.go:334] "Generic (PLEG): container finished" podID="8d211019-3f1c-40de-82fc-7ed19c831c7c" containerID="ab4603996e10efb1837fd43c7ec7732d2eee4186c2e23d81c201b5e44bc74800" exitCode=0 Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.759073 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6zmp4" event={"ID":"8d211019-3f1c-40de-82fc-7ed19c831c7c","Type":"ContainerDied","Data":"ab4603996e10efb1837fd43c7ec7732d2eee4186c2e23d81c201b5e44bc74800"} Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.764371 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bf7fd98f9-j4rff" event={"ID":"6f85d40e-16b8-4ece-a268-8b4d227ac36c","Type":"ContainerStarted","Data":"c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99"} Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.764414 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bf7fd98f9-j4rff" 
event={"ID":"6f85d40e-16b8-4ece-a268-8b4d227ac36c","Type":"ContainerStarted","Data":"f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465"} Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.767080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerStarted","Data":"42c273be61601c62ad5e6c7cfbad29f367aa6e1f643df15bf038851fd84eff5c"} Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.769130 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" event={"ID":"8a18c26d-a476-4e4b-9320-84369da38cf2","Type":"ContainerStarted","Data":"40c6e393bbfaf517c5fecd9b2453770dae8d96c73815045f791a0be9bcebd55d"} Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.769162 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" event={"ID":"8a18c26d-a476-4e4b-9320-84369da38cf2","Type":"ContainerStarted","Data":"4c2d835c2cdf83c5990f9e667ecb740187ba835cbe395bfdce7fceef0f080f02"} Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.800171 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" podStartSLOduration=2.574102269 podStartE2EDuration="4.800157157s" podCreationTimestamp="2025-10-13 06:46:57 +0000 UTC" firstStartedPulling="2025-10-13 06:46:58.650307549 +0000 UTC m=+1108.750730465" lastFinishedPulling="2025-10-13 06:47:00.876362437 +0000 UTC m=+1110.976785353" observedRunningTime="2025-10-13 06:47:01.798682845 +0000 UTC m=+1111.899105761" watchObservedRunningTime="2025-10-13 06:47:01.800157157 +0000 UTC m=+1111.900580073" Oct 13 06:47:01 crc kubenswrapper[4833]: I1013 06:47:01.816841 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-bf7fd98f9-j4rff" podStartSLOduration=2.689353472 podStartE2EDuration="4.816825171s" podCreationTimestamp="2025-10-13 06:46:57 +0000 UTC" firstStartedPulling="2025-10-13 06:46:58.747403445 +0000 UTC m=+1108.847826371" lastFinishedPulling="2025-10-13 06:47:00.874875154 +0000 UTC m=+1110.975298070" observedRunningTime="2025-10-13 06:47:01.81508146 +0000 UTC m=+1111.915504376" watchObservedRunningTime="2025-10-13 06:47:01.816825171 +0000 UTC m=+1111.917248087" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.327125 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6869bc4646-lrqdg" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.386957 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6869bc4646-lrqdg" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.495415 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-595797578d-ddhnv"] Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.497189 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.502371 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.502681 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.511463 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-595797578d-ddhnv"] Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.653635 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data-custom\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.653694 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be1410c-e237-4abe-9a2d-c8e8b5242d93-logs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.653728 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-public-tls-certs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.653782 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-combined-ca-bundle\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.653847 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.653948 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-internal-tls-certs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.653976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2px\" (UniqueName: \"kubernetes.io/projected/7be1410c-e237-4abe-9a2d-c8e8b5242d93-kube-api-access-gs2px\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.758445 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data-custom\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.758720 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be1410c-e237-4abe-9a2d-c8e8b5242d93-logs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.758743 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-public-tls-certs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.758785 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-combined-ca-bundle\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.758832 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.758923 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-internal-tls-certs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.758942 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2px\" (UniqueName: \"kubernetes.io/projected/7be1410c-e237-4abe-9a2d-c8e8b5242d93-kube-api-access-gs2px\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.777217 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be1410c-e237-4abe-9a2d-c8e8b5242d93-logs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.799943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.802917 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data-custom\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.804956 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-public-tls-certs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.806665 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-internal-tls-certs\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.807924 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2px\" (UniqueName: \"kubernetes.io/projected/7be1410c-e237-4abe-9a2d-c8e8b5242d93-kube-api-access-gs2px\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.809203 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-combined-ca-bundle\") pod \"barbican-api-595797578d-ddhnv\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:02 crc kubenswrapper[4833]: I1013 06:47:02.833961 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:03 crc kubenswrapper[4833]: I1013 06:47:03.947249 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6zmp4" Oct 13 06:47:03 crc kubenswrapper[4833]: I1013 06:47:03.964092 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-558c47b6d4-9zp2v" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.082893 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktklj\" (UniqueName: \"kubernetes.io/projected/8d211019-3f1c-40de-82fc-7ed19c831c7c-kube-api-access-ktklj\") pod \"8d211019-3f1c-40de-82fc-7ed19c831c7c\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.083321 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-config\") pod \"8d211019-3f1c-40de-82fc-7ed19c831c7c\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.083395 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-combined-ca-bundle\") pod \"8d211019-3f1c-40de-82fc-7ed19c831c7c\" (UID: \"8d211019-3f1c-40de-82fc-7ed19c831c7c\") " Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.092057 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d211019-3f1c-40de-82fc-7ed19c831c7c-kube-api-access-ktklj" (OuterVolumeSpecName: "kube-api-access-ktklj") pod "8d211019-3f1c-40de-82fc-7ed19c831c7c" (UID: "8d211019-3f1c-40de-82fc-7ed19c831c7c"). InnerVolumeSpecName "kube-api-access-ktklj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.117098 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-config" (OuterVolumeSpecName: "config") pod "8d211019-3f1c-40de-82fc-7ed19c831c7c" (UID: "8d211019-3f1c-40de-82fc-7ed19c831c7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.121213 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d211019-3f1c-40de-82fc-7ed19c831c7c" (UID: "8d211019-3f1c-40de-82fc-7ed19c831c7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.185876 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.185926 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktklj\" (UniqueName: \"kubernetes.io/projected/8d211019-3f1c-40de-82fc-7ed19c831c7c-kube-api-access-ktklj\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.185943 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d211019-3f1c-40de-82fc-7ed19c831c7c-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.437461 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-595797578d-ddhnv"] Oct 13 06:47:04 crc kubenswrapper[4833]: W1013 06:47:04.442997 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be1410c_e237_4abe_9a2d_c8e8b5242d93.slice/crio-12dfff0ff48505296d04e2748bf6622892fb8dcb7dc873c4cbe91b500026adee WatchSource:0}: Error finding container 12dfff0ff48505296d04e2748bf6622892fb8dcb7dc873c4cbe91b500026adee: Status 404 returned error can't find the container with id 12dfff0ff48505296d04e2748bf6622892fb8dcb7dc873c4cbe91b500026adee Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.812630 4833 generic.go:334] "Generic (PLEG): container finished" podID="b5d6e331-404e-48b3-b9ee-66386208af92" containerID="cef1392c2e441a2957315fea42ef8d028a266ebdbd1b0a9b26eb9e298a5bb917" exitCode=0 Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.812792 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ltqfn" event={"ID":"b5d6e331-404e-48b3-b9ee-66386208af92","Type":"ContainerDied","Data":"cef1392c2e441a2957315fea42ef8d028a266ebdbd1b0a9b26eb9e298a5bb917"} Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.816992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595797578d-ddhnv" event={"ID":"7be1410c-e237-4abe-9a2d-c8e8b5242d93","Type":"ContainerStarted","Data":"e7c4fb08195b32e50c609a55ad8f5ba6ccf4ebfb598a3cf9e868edc5d55a8023"} Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.817039 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595797578d-ddhnv" event={"ID":"7be1410c-e237-4abe-9a2d-c8e8b5242d93","Type":"ContainerStarted","Data":"12dfff0ff48505296d04e2748bf6622892fb8dcb7dc873c4cbe91b500026adee"} Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.819020 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6zmp4" event={"ID":"8d211019-3f1c-40de-82fc-7ed19c831c7c","Type":"ContainerDied","Data":"1773b76ece2e0388455979a68ab02e8ab15e68c3ea9b27a00de5872c349fa3a1"} Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.819050 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1773b76ece2e0388455979a68ab02e8ab15e68c3ea9b27a00de5872c349fa3a1" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.819132 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6zmp4" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.822547 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerStarted","Data":"5acb9e02c1b90b2983691f74932d3024b23ddf2e04312a1fca2aa178c00fe6fd"} Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.822845 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 06:47:04 crc kubenswrapper[4833]: I1013 06:47:04.869296 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.019184149 podStartE2EDuration="8.869275844s" podCreationTimestamp="2025-10-13 06:46:56 +0000 UTC" firstStartedPulling="2025-10-13 06:46:57.571427399 +0000 UTC m=+1107.671850315" lastFinishedPulling="2025-10-13 06:47:04.421519094 +0000 UTC m=+1114.521942010" observedRunningTime="2025-10-13 06:47:04.86704849 +0000 UTC m=+1114.967471406" watchObservedRunningTime="2025-10-13 06:47:04.869275844 +0000 UTC m=+1114.969698760" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.239259 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9cb888f-7c29b"] Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.240087 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" podUID="f4d093b1-7af9-4157-902f-78359796b208" containerName="dnsmasq-dns" containerID="cri-o://dbd6ac2a64e81cc6af84b83d350eb067bb96ea5873ddac6c0941d36320782c89" gracePeriod=10 Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.249230 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.283683 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-crk9m"] Oct 13 06:47:05 crc kubenswrapper[4833]: E1013 06:47:05.284114 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d211019-3f1c-40de-82fc-7ed19c831c7c" containerName="neutron-db-sync" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.284129 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d211019-3f1c-40de-82fc-7ed19c831c7c" containerName="neutron-db-sync" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.284360 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d211019-3f1c-40de-82fc-7ed19c831c7c" containerName="neutron-db-sync" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.285527 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.294323 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-crk9m"] Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.355164 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cbbcdc7cb-9s5cb"] Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.357721 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.363485 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qc2b8" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.363639 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.363937 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.399393 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cbbcdc7cb-9s5cb"] Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.408017 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.512840 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-ovndb-tls-certs\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513099 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-combined-ca-bundle\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513152 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-httpd-config\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z79kq\" (UniqueName: \"kubernetes.io/projected/90d6e69b-7990-402a-90b1-affbd44376cd-kube-api-access-z79kq\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-svc\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513499 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513554 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2rq8\" (UniqueName: \"kubernetes.io/projected/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-kube-api-access-h2rq8\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513644 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513673 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-config\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.513696 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-config\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616045 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-combined-ca-bundle\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-httpd-config\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616143 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616160 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z79kq\" (UniqueName: \"kubernetes.io/projected/90d6e69b-7990-402a-90b1-affbd44376cd-kube-api-access-z79kq\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-svc\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616249 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616270 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2rq8\" (UniqueName: \"kubernetes.io/projected/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-kube-api-access-h2rq8\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616319 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616334 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-config\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-config\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.616369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-ovndb-tls-certs\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.618337 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.619392 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.620486 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-svc\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: 
\"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.620670 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-config\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.622178 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.633531 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-combined-ca-bundle\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.641037 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2rq8\" (UniqueName: \"kubernetes.io/projected/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-kube-api-access-h2rq8\") pod \"dnsmasq-dns-5c78787df7-crk9m\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.641248 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z79kq\" (UniqueName: \"kubernetes.io/projected/90d6e69b-7990-402a-90b1-affbd44376cd-kube-api-access-z79kq\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.641415 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.642053 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-ovndb-tls-certs\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.643471 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-config\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.656383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-httpd-config\") pod \"neutron-cbbcdc7cb-9s5cb\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.736145 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.854011 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4d093b1-7af9-4157-902f-78359796b208" containerID="dbd6ac2a64e81cc6af84b83d350eb067bb96ea5873ddac6c0941d36320782c89" exitCode=0 Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.854449 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" event={"ID":"f4d093b1-7af9-4157-902f-78359796b208","Type":"ContainerDied","Data":"dbd6ac2a64e81cc6af84b83d350eb067bb96ea5873ddac6c0941d36320782c89"} Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.886074 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595797578d-ddhnv" event={"ID":"7be1410c-e237-4abe-9a2d-c8e8b5242d93","Type":"ContainerStarted","Data":"cbe232a2ef3f6d567a6669fbc2756b76c79d3815ef20ff4c3ce4de44b9dfa6da"} Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.886875 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.916837 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-595797578d-ddhnv" podStartSLOduration=3.916784573 podStartE2EDuration="3.916784573s" podCreationTimestamp="2025-10-13 06:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:05.909605275 +0000 UTC m=+1116.010028191" watchObservedRunningTime="2025-10-13 06:47:05.916784573 +0000 UTC m=+1116.017207489" Oct 13 06:47:05 crc kubenswrapper[4833]: I1013 06:47:05.975807 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-crk9m"] Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.082502 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.229808 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-nb\") pod \"f4d093b1-7af9-4157-902f-78359796b208\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.229920 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x66cb\" (UniqueName: \"kubernetes.io/projected/f4d093b1-7af9-4157-902f-78359796b208-kube-api-access-x66cb\") pod \"f4d093b1-7af9-4157-902f-78359796b208\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.229951 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-swift-storage-0\") pod \"f4d093b1-7af9-4157-902f-78359796b208\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.230005 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-sb\") pod \"f4d093b1-7af9-4157-902f-78359796b208\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.230134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-config\") pod \"f4d093b1-7af9-4157-902f-78359796b208\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.230185 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-svc\") pod \"f4d093b1-7af9-4157-902f-78359796b208\" (UID: \"f4d093b1-7af9-4157-902f-78359796b208\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.263612 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 13 06:47:06 crc kubenswrapper[4833]: E1013 06:47:06.264247 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d093b1-7af9-4157-902f-78359796b208" containerName="dnsmasq-dns" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.264263 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d093b1-7af9-4157-902f-78359796b208" containerName="dnsmasq-dns" Oct 13 06:47:06 crc kubenswrapper[4833]: E1013 06:47:06.264435 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d093b1-7af9-4157-902f-78359796b208" containerName="init" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.264444 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d093b1-7af9-4157-902f-78359796b208" containerName="init" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.264660 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d093b1-7af9-4157-902f-78359796b208" containerName="dnsmasq-dns" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.265406 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.281382 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.294418 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jvnl5" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.294723 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.294688 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.320445 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d093b1-7af9-4157-902f-78359796b208-kube-api-access-x66cb" (OuterVolumeSpecName: "kube-api-access-x66cb") pod "f4d093b1-7af9-4157-902f-78359796b208" (UID: "f4d093b1-7af9-4157-902f-78359796b208"). InnerVolumeSpecName "kube-api-access-x66cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.338344 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.338384 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.338412 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwcql\" (UniqueName: \"kubernetes.io/projected/360072b6-cb90-4df5-9762-d7e212a17b1e-kube-api-access-wwcql\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.338456 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.338715 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x66cb\" (UniqueName: \"kubernetes.io/projected/f4d093b1-7af9-4157-902f-78359796b208-kube-api-access-x66cb\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.389768 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.441792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.442260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.442282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwcql\" (UniqueName: \"kubernetes.io/projected/360072b6-cb90-4df5-9762-d7e212a17b1e-kube-api-access-wwcql\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.442306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.443085 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.447529 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4d093b1-7af9-4157-902f-78359796b208" (UID: "f4d093b1-7af9-4157-902f-78359796b208"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.452614 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.454114 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.473215 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwcql\" (UniqueName: \"kubernetes.io/projected/360072b6-cb90-4df5-9762-d7e212a17b1e-kube-api-access-wwcql\") pod \"openstackclient\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.487230 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4d093b1-7af9-4157-902f-78359796b208" (UID: "f4d093b1-7af9-4157-902f-78359796b208"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.507600 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4d093b1-7af9-4157-902f-78359796b208" (UID: "f4d093b1-7af9-4157-902f-78359796b208"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.515056 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-config" (OuterVolumeSpecName: "config") pod "f4d093b1-7af9-4157-902f-78359796b208" (UID: "f4d093b1-7af9-4157-902f-78359796b208"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.532076 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4d093b1-7af9-4157-902f-78359796b208" (UID: "f4d093b1-7af9-4157-902f-78359796b208"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.532149 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.532793 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.542296 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544099 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-config-data\") pod \"b5d6e331-404e-48b3-b9ee-66386208af92\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544186 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-db-sync-config-data\") pod \"b5d6e331-404e-48b3-b9ee-66386208af92\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544280 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfq9w\" (UniqueName: \"kubernetes.io/projected/b5d6e331-404e-48b3-b9ee-66386208af92-kube-api-access-xfq9w\") pod \"b5d6e331-404e-48b3-b9ee-66386208af92\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544325 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-scripts\") pod \"b5d6e331-404e-48b3-b9ee-66386208af92\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544445 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5d6e331-404e-48b3-b9ee-66386208af92-etc-machine-id\") pod \"b5d6e331-404e-48b3-b9ee-66386208af92\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544468 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-combined-ca-bundle\") pod \"b5d6e331-404e-48b3-b9ee-66386208af92\" (UID: \"b5d6e331-404e-48b3-b9ee-66386208af92\") " Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544832 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544849 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544858 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544866 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.544874 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f4d093b1-7af9-4157-902f-78359796b208-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.546631 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d6e331-404e-48b3-b9ee-66386208af92-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b5d6e331-404e-48b3-b9ee-66386208af92" (UID: "b5d6e331-404e-48b3-b9ee-66386208af92"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.550173 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b5d6e331-404e-48b3-b9ee-66386208af92" (UID: "b5d6e331-404e-48b3-b9ee-66386208af92"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.553662 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-scripts" (OuterVolumeSpecName: "scripts") pod "b5d6e331-404e-48b3-b9ee-66386208af92" (UID: "b5d6e331-404e-48b3-b9ee-66386208af92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.565806 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d6e331-404e-48b3-b9ee-66386208af92-kube-api-access-xfq9w" (OuterVolumeSpecName: "kube-api-access-xfq9w") pod "b5d6e331-404e-48b3-b9ee-66386208af92" (UID: "b5d6e331-404e-48b3-b9ee-66386208af92"). InnerVolumeSpecName "kube-api-access-xfq9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.572549 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 13 06:47:06 crc kubenswrapper[4833]: E1013 06:47:06.573399 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d6e331-404e-48b3-b9ee-66386208af92" containerName="cinder-db-sync" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.575089 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d6e331-404e-48b3-b9ee-66386208af92" containerName="cinder-db-sync" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.575568 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d6e331-404e-48b3-b9ee-66386208af92" containerName="cinder-db-sync" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.577739 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.585273 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.614724 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5d6e331-404e-48b3-b9ee-66386208af92" (UID: "b5d6e331-404e-48b3-b9ee-66386208af92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.633060 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-config-data" (OuterVolumeSpecName: "config-data") pod "b5d6e331-404e-48b3-b9ee-66386208af92" (UID: "b5d6e331-404e-48b3-b9ee-66386208af92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.646628 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5d6e331-404e-48b3-b9ee-66386208af92-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.646669 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.646680 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.646687 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.646697 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfq9w\" (UniqueName: \"kubernetes.io/projected/b5d6e331-404e-48b3-b9ee-66386208af92-kube-api-access-xfq9w\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.646708 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d6e331-404e-48b3-b9ee-66386208af92-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:06 crc kubenswrapper[4833]: W1013 06:47:06.689610 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d6e69b_7990_402a_90b1_affbd44376cd.slice/crio-95e559b240dc2787177b1ddc8dfdae8672e9457b79ea305bf2acfe212d88728a WatchSource:0}: Error finding container 95e559b240dc2787177b1ddc8dfdae8672e9457b79ea305bf2acfe212d88728a: Status 404 returned error can't find the container with id 95e559b240dc2787177b1ddc8dfdae8672e9457b79ea305bf2acfe212d88728a Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.691523 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cbbcdc7cb-9s5cb"] Oct 13 06:47:06 crc kubenswrapper[4833]: E1013 06:47:06.724950 4833 log.go:32] "RunPodSandbox from runtime service failed" err=< Oct 13 06:47:06 crc kubenswrapper[4833]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_360072b6-cb90-4df5-9762-d7e212a17b1e_0(1a4d4d8f1e4b1f24f69bfa60a713627ca67ff55a3ba1ec23d497dd4730b78b2f): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1a4d4d8f1e4b1f24f69bfa60a713627ca67ff55a3ba1ec23d497dd4730b78b2f" Netns:"/var/run/netns/49a867d9-619e-4306-883e-45cc64979af1" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=1a4d4d8f1e4b1f24f69bfa60a713627ca67ff55a3ba1ec23d497dd4730b78b2f;K8S_POD_UID=360072b6-cb90-4df5-9762-d7e212a17b1e" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/360072b6-cb90-4df5-9762-d7e212a17b1e]: expected pod UID "360072b6-cb90-4df5-9762-d7e212a17b1e" but got "5783401d-3007-4df3-a902-1869d62c4acc" from Kube API Oct 13 06:47:06 crc kubenswrapper[4833]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Oct 13 06:47:06 crc kubenswrapper[4833]: > Oct 13 06:47:06 crc kubenswrapper[4833]: E1013 06:47:06.725027 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Oct 13 06:47:06 crc kubenswrapper[4833]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_360072b6-cb90-4df5-9762-d7e212a17b1e_0(1a4d4d8f1e4b1f24f69bfa60a713627ca67ff55a3ba1ec23d497dd4730b78b2f): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1a4d4d8f1e4b1f24f69bfa60a713627ca67ff55a3ba1ec23d497dd4730b78b2f" Netns:"/var/run/netns/49a867d9-619e-4306-883e-45cc64979af1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=1a4d4d8f1e4b1f24f69bfa60a713627ca67ff55a3ba1ec23d497dd4730b78b2f;K8S_POD_UID=360072b6-cb90-4df5-9762-d7e212a17b1e" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/360072b6-cb90-4df5-9762-d7e212a17b1e]: expected pod UID "360072b6-cb90-4df5-9762-d7e212a17b1e" but got "5783401d-3007-4df3-a902-1869d62c4acc" from Kube API Oct 13 06:47:06 crc kubenswrapper[4833]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Oct 13 06:47:06 crc kubenswrapper[4833]: > pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.748489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.748624 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config-secret\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.748719 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdn74\" (UniqueName: \"kubernetes.io/projected/5783401d-3007-4df3-a902-1869d62c4acc-kube-api-access-fdn74\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.748743 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.851041 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.851379 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config-secret\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.851425 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdn74\" (UniqueName: \"kubernetes.io/projected/5783401d-3007-4df3-a902-1869d62c4acc-kube-api-access-fdn74\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.851444 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.852515 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.855558 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config-secret\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.865479 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.868975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdn74\" (UniqueName: 
\"kubernetes.io/projected/5783401d-3007-4df3-a902-1869d62c4acc-kube-api-access-fdn74\") pod \"openstackclient\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.905236 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.918723 4833 generic.go:334] "Generic (PLEG): container finished" podID="df3eb85a-16b6-46fa-9d0a-1c209cd72d04" containerID="de59c5cc2ff6668fe5e7cce4868a7246a0c8cf29ed3fdb7afc6625d050c19322" exitCode=0 Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.918827 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-crk9m" event={"ID":"df3eb85a-16b6-46fa-9d0a-1c209cd72d04","Type":"ContainerDied","Data":"de59c5cc2ff6668fe5e7cce4868a7246a0c8cf29ed3fdb7afc6625d050c19322"} Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.918876 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-crk9m" event={"ID":"df3eb85a-16b6-46fa-9d0a-1c209cd72d04","Type":"ContainerStarted","Data":"5091316ba9387574a05bb4871dba19fda40d5d59f97d16ce9249b8128bca449c"} Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.927885 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbbcdc7cb-9s5cb" event={"ID":"90d6e69b-7990-402a-90b1-affbd44376cd","Type":"ContainerStarted","Data":"95e559b240dc2787177b1ddc8dfdae8672e9457b79ea305bf2acfe212d88728a"} Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.946471 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" event={"ID":"f4d093b1-7af9-4157-902f-78359796b208","Type":"ContainerDied","Data":"ff4213ac298c2cae4e04d07af912b4c3c69a295b2a758a7e80b7072706ef4916"} Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.946521 4833 scope.go:117] "RemoveContainer" containerID="dbd6ac2a64e81cc6af84b83d350eb067bb96ea5873ddac6c0941d36320782c89" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.946682 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9cb888f-7c29b" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.954127 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ltqfn" event={"ID":"b5d6e331-404e-48b3-b9ee-66386208af92","Type":"ContainerDied","Data":"81a0b378f31bd6d899f9c4dfba05c3dd6bcd7e5caca3a7c448508c1eba0d678e"} Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.954171 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a0b378f31bd6d899f9c4dfba05c3dd6bcd7e5caca3a7c448508c1eba0d678e" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.954337 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ltqfn" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.954599 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 13 06:47:06 crc kubenswrapper[4833]: I1013 06:47:06.954663 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.147137 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-crk9m"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.195270 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.196757 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.199711 4833 scope.go:117] "RemoveContainer" containerID="a312f45e423f2417aebdb51edcfea9161eb4d3497e46e9da0e5847e8340d2ff4" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.200313 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wjvmq" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.202373 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.202515 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.202695 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.206331 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.233403 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9cb888f-7c29b"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.233727 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="360072b6-cb90-4df5-9762-d7e212a17b1e" podUID="5783401d-3007-4df3-a902-1869d62c4acc" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.261501 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.261600 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.261758 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.261829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.261858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhkq\" (UniqueName: \"kubernetes.io/projected/b0387db0-cc24-4278-bb4b-f3f5784440ef-kube-api-access-twhkq\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.261930 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0387db0-cc24-4278-bb4b-f3f5784440ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.270905 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9cb888f-7c29b"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.358764 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-zbzrf"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.361996 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.368936 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-combined-ca-bundle\") pod \"360072b6-cb90-4df5-9762-d7e212a17b1e\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.369068 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwcql\" (UniqueName: \"kubernetes.io/projected/360072b6-cb90-4df5-9762-d7e212a17b1e-kube-api-access-wwcql\") pod \"360072b6-cb90-4df5-9762-d7e212a17b1e\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.369095 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config-secret\") pod \"360072b6-cb90-4df5-9762-d7e212a17b1e\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.369157 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config\") pod \"360072b6-cb90-4df5-9762-d7e212a17b1e\" (UID: \"360072b6-cb90-4df5-9762-d7e212a17b1e\") " Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.370260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.370317 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-scripts\") 
pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.370399 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.370421 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twhkq\" (UniqueName: \"kubernetes.io/projected/b0387db0-cc24-4278-bb4b-f3f5784440ef-kube-api-access-twhkq\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.370497 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0387db0-cc24-4278-bb4b-f3f5784440ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.370589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.373501 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "360072b6-cb90-4df5-9762-d7e212a17b1e" (UID: "360072b6-cb90-4df5-9762-d7e212a17b1e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.377646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "360072b6-cb90-4df5-9762-d7e212a17b1e" (UID: "360072b6-cb90-4df5-9762-d7e212a17b1e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.378033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0387db0-cc24-4278-bb4b-f3f5784440ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.378672 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "360072b6-cb90-4df5-9762-d7e212a17b1e" (UID: "360072b6-cb90-4df5-9762-d7e212a17b1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.382061 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.382862 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.391778 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360072b6-cb90-4df5-9762-d7e212a17b1e-kube-api-access-wwcql" (OuterVolumeSpecName: "kube-api-access-wwcql") pod "360072b6-cb90-4df5-9762-d7e212a17b1e" (UID: "360072b6-cb90-4df5-9762-d7e212a17b1e"). InnerVolumeSpecName "kube-api-access-wwcql". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.392074 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.393899 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.398266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.425034 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twhkq\" (UniqueName: \"kubernetes.io/projected/b0387db0-cc24-4278-bb4b-f3f5784440ef-kube-api-access-twhkq\") pod \"cinder-scheduler-0\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.425093 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-zbzrf"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.472644 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.472743 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-config\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.472790 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.472815 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-svc\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.472889 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.472912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sml8b\" (UniqueName: \"kubernetes.io/projected/377af78b-ea09-46d2-939d-debdb6630796-kube-api-access-sml8b\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.472991 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.473006 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwcql\" (UniqueName: \"kubernetes.io/projected/360072b6-cb90-4df5-9762-d7e212a17b1e-kube-api-access-wwcql\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.473019 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.473031 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/360072b6-cb90-4df5-9762-d7e212a17b1e-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.490957 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.493138 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.497265 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.503015 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 13 06:47:07 crc kubenswrapper[4833]: E1013 06:47:07.507412 4833 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 13 06:47:07 crc kubenswrapper[4833]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/df3eb85a-16b6-46fa-9d0a-1c209cd72d04/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 13 06:47:07 crc kubenswrapper[4833]: > podSandboxID="5091316ba9387574a05bb4871dba19fda40d5d59f97d16ce9249b8128bca449c" Oct 13 06:47:07 crc kubenswrapper[4833]: E1013 06:47:07.507921 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 13 06:47:07 crc kubenswrapper[4833]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n567h69h57fh656h5dh545h7h595h5bhb9h5cbh548h575h68ch689h5cdhd6h5c6hd6h5c6h546h9h657h688h56h5cch9bh585h56fh64ch587h647q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2rq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5c78787df7-crk9m_openstack(df3eb85a-16b6-46fa-9d0a-1c209cd72d04): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/df3eb85a-16b6-46fa-9d0a-1c209cd72d04/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 13 06:47:07 crc kubenswrapper[4833]: > logger="UnhandledError" Oct 13 06:47:07 crc kubenswrapper[4833]: E1013 06:47:07.509035 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/df3eb85a-16b6-46fa-9d0a-1c209cd72d04/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5c78787df7-crk9m" podUID="df3eb85a-16b6-46fa-9d0a-1c209cd72d04" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575070 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575159 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5cb165-db41-4025-8aca-e3fab02fcaee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575210 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5cb165-db41-4025-8aca-e3fab02fcaee-logs\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575754 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-scripts\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc 
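The CreateContainerError above hits the old dnsmasq pod (dnsmasq-dns-5c78787df7-crk9m) while it is being replaced: the kubelet stages each subPath mount as a bind-mount source under /var/lib/kubelet/pods/<pod-uid>/volume-subpaths/<volume>/<container>/<mount-index>, and here that source disappeared between volume setup and container creation, so CRI-O fails with "No such file or directory"; the SyncLoop DELETE for the same pod shortly after fits that teardown race. A small sketch of checking such a staged source path on the node, assuming shell access to the host (the path is the one quoted in the error; the script itself is illustrative):

import os

# Bind-mount source the kubelet staged for the dnsmasq-dns container's
# dns-svc subPath, exactly as quoted in the error above.
POD_UID = "df3eb85a-16b6-46fa-9d0a-1c209cd72d04"
src = f"/var/lib/kubelet/pods/{POD_UID}/volume-subpaths/dns-svc/dnsmasq-dns/1"

# False while the container is being created means the source was torn
# down mid-start, which the runtime surfaces as ENOENT at mount time.
print(src, "exists:", os.path.exists(src))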
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575070 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575159 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5cb165-db41-4025-8aca-e3fab02fcaee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575210 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5cb165-db41-4025-8aca-e3fab02fcaee-logs\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575754 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-scripts\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575842 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-config\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.575924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kkn\" (UniqueName: \"kubernetes.io/projected/6f5cb165-db41-4025-8aca-e3fab02fcaee-kube-api-access-j5kkn\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.576025 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.576077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-svc\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.576149 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.576270 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.576298 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sml8b\" (UniqueName: \"kubernetes.io/projected/377af78b-ea09-46d2-939d-debdb6630796-kube-api-access-sml8b\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.576495 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf"
Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.577653 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-config\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.595176 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.601098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-svc\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.601299 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.602082 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sml8b\" (UniqueName: \"kubernetes.io/projected/377af78b-ea09-46d2-939d-debdb6630796-kube-api-access-sml8b\") pod \"dnsmasq-dns-84bd785c49-zbzrf\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.637372 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.678478 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.678620 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.678660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5cb165-db41-4025-8aca-e3fab02fcaee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.678694 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5cb165-db41-4025-8aca-e3fab02fcaee-logs\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.678731 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-scripts\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.678757 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.678802 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5kkn\" (UniqueName: \"kubernetes.io/projected/6f5cb165-db41-4025-8aca-e3fab02fcaee-kube-api-access-j5kkn\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.679305 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5cb165-db41-4025-8aca-e3fab02fcaee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.680143 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5cb165-db41-4025-8aca-e3fab02fcaee-logs\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.688362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.688496 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-scripts\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.692180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.693089 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: E1013 06:47:07.696139 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf3eb85a_16b6_46fa_9d0a_1c209cd72d04.slice/crio-69325b50e4678f5c96ec64bdada93d08a0793d93bff13f35174a6b7536ce13cf.scope\": RecentStats: unable to find data in memory cache]" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.701047 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5kkn\" (UniqueName: 
\"kubernetes.io/projected/6f5cb165-db41-4025-8aca-e3fab02fcaee-kube-api-access-j5kkn\") pod \"cinder-api-0\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.717169 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.772693 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.891028 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.991259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbbcdc7cb-9s5cb" event={"ID":"90d6e69b-7990-402a-90b1-affbd44376cd","Type":"ContainerStarted","Data":"08788cf590584a916b041279ff8d75a402c9c1f78d1a781850a0bc86ece29f34"} Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.991558 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbbcdc7cb-9s5cb" event={"ID":"90d6e69b-7990-402a-90b1-affbd44376cd","Type":"ContainerStarted","Data":"eabb7866f4cd10a20d8f69c0751abec8dd19c02f3aa1edee8758d0c3f00c24c5"} Oct 13 06:47:07 crc kubenswrapper[4833]: I1013 06:47:07.991596 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.008921 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5783401d-3007-4df3-a902-1869d62c4acc","Type":"ContainerStarted","Data":"d1f90d9d97e1227c722676087df3dfa9aad394cdb1cf35a09e8e6c08f29f5b37"} Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.009745 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.014279 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cbbcdc7cb-9s5cb" podStartSLOduration=3.014263172 podStartE2EDuration="3.014263172s" podCreationTimestamp="2025-10-13 06:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:08.012934863 +0000 UTC m=+1118.113357789" watchObservedRunningTime="2025-10-13 06:47:08.014263172 +0000 UTC m=+1118.114686088" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.050896 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="360072b6-cb90-4df5-9762-d7e212a17b1e" podUID="5783401d-3007-4df3-a902-1869d62c4acc" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.321104 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.412915 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-zbzrf"] Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.519106 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.611869 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-nb\") pod \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.616667 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-sb\") pod \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.616856 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-config\") pod \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.616896 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2rq8\" (UniqueName: \"kubernetes.io/projected/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-kube-api-access-h2rq8\") pod \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.616974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-svc\") pod \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.617107 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-swift-storage-0\") pod \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\" (UID: \"df3eb85a-16b6-46fa-9d0a-1c209cd72d04\") " Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.627465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-kube-api-access-h2rq8" (OuterVolumeSpecName: "kube-api-access-h2rq8") pod "df3eb85a-16b6-46fa-9d0a-1c209cd72d04" (UID: "df3eb85a-16b6-46fa-9d0a-1c209cd72d04"). InnerVolumeSpecName "kube-api-access-h2rq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.662760 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360072b6-cb90-4df5-9762-d7e212a17b1e" path="/var/lib/kubelet/pods/360072b6-cb90-4df5-9762-d7e212a17b1e/volumes" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.663311 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d093b1-7af9-4157-902f-78359796b208" path="/var/lib/kubelet/pods/f4d093b1-7af9-4157-902f-78359796b208/volumes" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.719943 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2rq8\" (UniqueName: \"kubernetes.io/projected/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-kube-api-access-h2rq8\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:08 crc kubenswrapper[4833]: W1013 06:47:08.724884 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5cb165_db41_4025_8aca_e3fab02fcaee.slice/crio-665f3a8f3447f6848b636a81060d1035a7a6db2ba43bf15bb54b861b25f3dbc3 WatchSource:0}: Error finding container 665f3a8f3447f6848b636a81060d1035a7a6db2ba43bf15bb54b861b25f3dbc3: Status 404 returned error can't find the container with id 665f3a8f3447f6848b636a81060d1035a7a6db2ba43bf15bb54b861b25f3dbc3 Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.731663 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.736167 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df3eb85a-16b6-46fa-9d0a-1c209cd72d04" (UID: "df3eb85a-16b6-46fa-9d0a-1c209cd72d04"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.737914 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-config" (OuterVolumeSpecName: "config") pod "df3eb85a-16b6-46fa-9d0a-1c209cd72d04" (UID: "df3eb85a-16b6-46fa-9d0a-1c209cd72d04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.739209 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df3eb85a-16b6-46fa-9d0a-1c209cd72d04" (UID: "df3eb85a-16b6-46fa-9d0a-1c209cd72d04"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.781004 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df3eb85a-16b6-46fa-9d0a-1c209cd72d04" (UID: "df3eb85a-16b6-46fa-9d0a-1c209cd72d04"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.807729 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df3eb85a-16b6-46fa-9d0a-1c209cd72d04" (UID: "df3eb85a-16b6-46fa-9d0a-1c209cd72d04"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.823926 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.823969 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.823982 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.823993 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:08 crc kubenswrapper[4833]: I1013 06:47:08.824004 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3eb85a-16b6-46fa-9d0a-1c209cd72d04-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.026122 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-crk9m" Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.027043 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-crk9m" event={"ID":"df3eb85a-16b6-46fa-9d0a-1c209cd72d04","Type":"ContainerDied","Data":"5091316ba9387574a05bb4871dba19fda40d5d59f97d16ce9249b8128bca449c"} Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.027109 4833 scope.go:117] "RemoveContainer" containerID="de59c5cc2ff6668fe5e7cce4868a7246a0c8cf29ed3fdb7afc6625d050c19322" Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.032436 4833 generic.go:334] "Generic (PLEG): container finished" podID="377af78b-ea09-46d2-939d-debdb6630796" containerID="3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc" exitCode=0 Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.032500 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" event={"ID":"377af78b-ea09-46d2-939d-debdb6630796","Type":"ContainerDied","Data":"3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc"} Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.032530 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" event={"ID":"377af78b-ea09-46d2-939d-debdb6630796","Type":"ContainerStarted","Data":"d76e11ae046a83edda71eb042d58e7277e560446a94bc08d5854c0ffc1e7f7d2"} Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.042803 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0387db0-cc24-4278-bb4b-f3f5784440ef","Type":"ContainerStarted","Data":"da38dea0e524b44150b38db4be9cce8ea879bb749fe6f3a3ced3d59e07b588bd"} Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.062370 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f5cb165-db41-4025-8aca-e3fab02fcaee","Type":"ContainerStarted","Data":"665f3a8f3447f6848b636a81060d1035a7a6db2ba43bf15bb54b861b25f3dbc3"} Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.368093 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-crk9m"] Oct 13 06:47:09 crc kubenswrapper[4833]: I1013 06:47:09.382749 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-crk9m"] Oct 13 06:47:10 crc kubenswrapper[4833]: I1013 06:47:10.087823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f5cb165-db41-4025-8aca-e3fab02fcaee","Type":"ContainerStarted","Data":"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668"} Oct 13 06:47:10 crc kubenswrapper[4833]: I1013 06:47:10.098575 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" event={"ID":"377af78b-ea09-46d2-939d-debdb6630796","Type":"ContainerStarted","Data":"3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a"} Oct 13 06:47:10 crc kubenswrapper[4833]: I1013 06:47:10.100321 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:10 crc kubenswrapper[4833]: I1013 06:47:10.643127 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3eb85a-16b6-46fa-9d0a-1c209cd72d04" path="/var/lib/kubelet/pods/df3eb85a-16b6-46fa-9d0a-1c209cd72d04/volumes" Oct 13 06:47:10 crc kubenswrapper[4833]: I1013 06:47:10.671475 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" podStartSLOduration=3.671424269 podStartE2EDuration="3.671424269s" podCreationTimestamp="2025-10-13 06:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:10.128957281 +0000 UTC m=+1120.229380217" watchObservedRunningTime="2025-10-13 06:47:10.671424269 +0000 UTC m=+1120.771847195" Oct 13 06:47:10 crc kubenswrapper[4833]: I1013 06:47:10.765469 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:11 crc kubenswrapper[4833]: I1013 06:47:11.114179 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0387db0-cc24-4278-bb4b-f3f5784440ef","Type":"ContainerStarted","Data":"426841c889091e481b9bcd8b1d16d885174020895778937bdf2d01a96e7cb4bf"} Oct 13 06:47:11 crc kubenswrapper[4833]: I1013 06:47:11.117246 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f5cb165-db41-4025-8aca-e3fab02fcaee","Type":"ContainerStarted","Data":"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1"} Oct 13 06:47:11 crc kubenswrapper[4833]: I1013 06:47:11.117308 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 13 06:47:11 crc kubenswrapper[4833]: I1013 06:47:11.137299 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.137278343 podStartE2EDuration="4.137278343s" podCreationTimestamp="2025-10-13 06:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:11.133203835 +0000 UTC m=+1121.233626751" watchObservedRunningTime="2025-10-13 06:47:11.137278343 +0000 UTC m=+1121.237701259" Oct 13 06:47:11 crc kubenswrapper[4833]: I1013 06:47:11.236858 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.095466 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b78565d7c-d78jk"] Oct 13 06:47:12 crc kubenswrapper[4833]: E1013 06:47:12.095879 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3eb85a-16b6-46fa-9d0a-1c209cd72d04" containerName="init" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.095894 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3eb85a-16b6-46fa-9d0a-1c209cd72d04" containerName="init" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.096076 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3eb85a-16b6-46fa-9d0a-1c209cd72d04" containerName="init" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.097158 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.099405 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.110316 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.113414 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b78565d7c-d78jk"] Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.171065 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0387db0-cc24-4278-bb4b-f3f5784440ef","Type":"ContainerStarted","Data":"fd86922d1434f8eb76dfe2b32cbe10ca21776d31a57c402f5d666198f149ff3c"} Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.171769 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerName="cinder-api-log" containerID="cri-o://ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668" gracePeriod=30 Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.171844 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerName="cinder-api" containerID="cri-o://82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1" gracePeriod=30 Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.205947 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.097156449 podStartE2EDuration="5.205931495s" podCreationTimestamp="2025-10-13 06:47:07 +0000 UTC" firstStartedPulling="2025-10-13 06:47:08.358508639 +0000 UTC m=+1118.458931555" lastFinishedPulling="2025-10-13 06:47:09.467283695 +0000 UTC m=+1119.567706601" observedRunningTime="2025-10-13 06:47:12.198751447 +0000 UTC m=+1122.299174363" watchObservedRunningTime="2025-10-13 06:47:12.205931495 +0000 UTC m=+1122.306354411" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.210202 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-httpd-config\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.210260 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-internal-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.210311 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-combined-ca-bundle\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.210333 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-public-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.210353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-config\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.210376 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-ovndb-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.210412 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g6bm\" (UniqueName: \"kubernetes.io/projected/65e5cee6-ee1c-4612-89b8-c2cfe968438b-kube-api-access-2g6bm\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.311317 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.313058 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-ovndb-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.313157 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g6bm\" (UniqueName: \"kubernetes.io/projected/65e5cee6-ee1c-4612-89b8-c2cfe968438b-kube-api-access-2g6bm\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.313273 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-httpd-config\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.313361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-internal-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.313431 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-combined-ca-bundle\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 
06:47:12.313493 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-public-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.313528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-config\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.321660 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-config\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.324170 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-httpd-config\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.325257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-internal-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.327573 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-combined-ca-bundle\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.330246 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-public-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.333634 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g6bm\" (UniqueName: \"kubernetes.io/projected/65e5cee6-ee1c-4612-89b8-c2cfe968438b-kube-api-access-2g6bm\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.338160 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-ovndb-tls-certs\") pod \"neutron-b78565d7c-d78jk\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.413210 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.719588 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.813413 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85d74757d5-v95tz"] Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.815213 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.823781 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.823969 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.824127 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.835912 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85d74757d5-v95tz"] Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.907120 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.932263 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-config-data\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.932315 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-combined-ca-bundle\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.932342 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-internal-tls-certs\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.932381 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-log-httpd\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.932425 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-etc-swift\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.932455 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbb2r\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-kube-api-access-cbb2r\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.932485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-public-tls-certs\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:12 crc kubenswrapper[4833]: I1013 06:47:12.932514 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-run-httpd\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.035520 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-scripts\") pod \"6f5cb165-db41-4025-8aca-e3fab02fcaee\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.035941 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-combined-ca-bundle\") pod \"6f5cb165-db41-4025-8aca-e3fab02fcaee\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.035971 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data-custom\") pod \"6f5cb165-db41-4025-8aca-e3fab02fcaee\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036057 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data\") pod \"6f5cb165-db41-4025-8aca-e3fab02fcaee\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036098 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5kkn\" (UniqueName: \"kubernetes.io/projected/6f5cb165-db41-4025-8aca-e3fab02fcaee-kube-api-access-j5kkn\") pod \"6f5cb165-db41-4025-8aca-e3fab02fcaee\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036133 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5cb165-db41-4025-8aca-e3fab02fcaee-etc-machine-id\") pod \"6f5cb165-db41-4025-8aca-e3fab02fcaee\" (UID: \"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036283 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5cb165-db41-4025-8aca-e3fab02fcaee-logs\") pod \"6f5cb165-db41-4025-8aca-e3fab02fcaee\" (UID: 
\"6f5cb165-db41-4025-8aca-e3fab02fcaee\") " Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036583 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-internal-tls-certs\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036659 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-log-httpd\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036738 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-etc-swift\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036795 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbb2r\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-kube-api-access-cbb2r\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036851 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-public-tls-certs\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.036902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-run-httpd\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.037020 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-config-data\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.037062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-combined-ca-bundle\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.041624 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-log-httpd\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: 
I1013 06:47:13.043228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f5cb165-db41-4025-8aca-e3fab02fcaee-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6f5cb165-db41-4025-8aca-e3fab02fcaee" (UID: "6f5cb165-db41-4025-8aca-e3fab02fcaee"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.044232 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-run-httpd\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.046159 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5cb165-db41-4025-8aca-e3fab02fcaee-logs" (OuterVolumeSpecName: "logs") pod "6f5cb165-db41-4025-8aca-e3fab02fcaee" (UID: "6f5cb165-db41-4025-8aca-e3fab02fcaee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.049708 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-public-tls-certs\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.049942 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5cb165-db41-4025-8aca-e3fab02fcaee-kube-api-access-j5kkn" (OuterVolumeSpecName: "kube-api-access-j5kkn") pod "6f5cb165-db41-4025-8aca-e3fab02fcaee" (UID: "6f5cb165-db41-4025-8aca-e3fab02fcaee"). InnerVolumeSpecName "kube-api-access-j5kkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.050195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-etc-swift\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.051883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-combined-ca-bundle\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.057633 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-config-data\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.057753 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-scripts" (OuterVolumeSpecName: "scripts") pod "6f5cb165-db41-4025-8aca-e3fab02fcaee" (UID: "6f5cb165-db41-4025-8aca-e3fab02fcaee"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.063697 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbb2r\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-kube-api-access-cbb2r\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.063870 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f5cb165-db41-4025-8aca-e3fab02fcaee" (UID: "6f5cb165-db41-4025-8aca-e3fab02fcaee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.117432 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-internal-tls-certs\") pod \"swift-proxy-85d74757d5-v95tz\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.138377 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.138408 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5kkn\" (UniqueName: \"kubernetes.io/projected/6f5cb165-db41-4025-8aca-e3fab02fcaee-kube-api-access-j5kkn\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.138420 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5cb165-db41-4025-8aca-e3fab02fcaee-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.138429 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5cb165-db41-4025-8aca-e3fab02fcaee-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.138438 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.147713 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b78565d7c-d78jk"] Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.148699 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f5cb165-db41-4025-8aca-e3fab02fcaee" (UID: "6f5cb165-db41-4025-8aca-e3fab02fcaee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.192369 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b78565d7c-d78jk" event={"ID":"65e5cee6-ee1c-4612-89b8-c2cfe968438b","Type":"ContainerStarted","Data":"adef8b0bdfe5ad5877a14b9da9c7bf657b0674168363b76ea94b930673147f08"} Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.195471 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerID="82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1" exitCode=0 Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.195511 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerID="ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668" exitCode=143 Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.195730 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f5cb165-db41-4025-8aca-e3fab02fcaee","Type":"ContainerDied","Data":"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1"} Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.195771 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f5cb165-db41-4025-8aca-e3fab02fcaee","Type":"ContainerDied","Data":"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668"} Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.195786 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f5cb165-db41-4025-8aca-e3fab02fcaee","Type":"ContainerDied","Data":"665f3a8f3447f6848b636a81060d1035a7a6db2ba43bf15bb54b861b25f3dbc3"} Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.195808 4833 scope.go:117] "RemoveContainer" containerID="82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.196090 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.199219 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.205948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data" (OuterVolumeSpecName: "config-data") pod "6f5cb165-db41-4025-8aca-e3fab02fcaee" (UID: "6f5cb165-db41-4025-8aca-e3fab02fcaee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.240496 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.240522 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5cb165-db41-4025-8aca-e3fab02fcaee-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.269703 4833 scope.go:117] "RemoveContainer" containerID="ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.330701 4833 scope.go:117] "RemoveContainer" containerID="82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1" Oct 13 06:47:13 crc kubenswrapper[4833]: E1013 06:47:13.337703 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1\": container with ID starting with 82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1 not found: ID does not exist" containerID="82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.337778 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1"} err="failed to get container status \"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1\": rpc error: code = NotFound desc = could not find container \"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1\": container with ID starting with 82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1 not found: ID does not exist" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.337835 4833 scope.go:117] "RemoveContainer" containerID="ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668" Oct 13 06:47:13 crc kubenswrapper[4833]: E1013 06:47:13.340288 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668\": container with ID starting with ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668 not found: ID does not exist" containerID="ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.340333 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668"} err="failed to get container status \"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668\": rpc error: code = NotFound desc = could not find container \"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668\": container with ID starting with ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668 not found: ID does not exist" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.340348 4833 scope.go:117] "RemoveContainer" containerID="82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.341798 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1"} err="failed to get container status \"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1\": rpc error: code = NotFound desc = could not find container \"82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1\": container with ID starting with 82f7d68fb2410a534412786cd7f59706c57187386674198c2ab5bde4e7aec4f1 not found: ID does not exist" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.341854 4833 scope.go:117] "RemoveContainer" containerID="ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.343116 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668"} err="failed to get container status \"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668\": rpc error: code = NotFound desc = could not find container \"ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668\": container with ID starting with ead1c4201d286b45fac83abd7dd463f0eaee1be093395c65fcd7360498838668 not found: ID does not exist" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.565522 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.582930 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.612692 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:13 crc kubenswrapper[4833]: E1013 06:47:13.613111 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerName="cinder-api" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.613127 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerName="cinder-api" Oct 13 06:47:13 crc kubenswrapper[4833]: E1013 06:47:13.613170 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerName="cinder-api-log" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.613181 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerName="cinder-api-log" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.613405 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerName="cinder-api-log" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.613428 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" containerName="cinder-api" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.614728 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.619736 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.620058 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.620195 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.623486 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.764465 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.764793 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.764814 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.764833 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-logs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.764849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.765842 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.765892 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6q7d\" (UniqueName: \"kubernetes.io/projected/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-kube-api-access-j6q7d\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.765920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.766174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.868875 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.868963 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.868994 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.869012 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.869038 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-logs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.869062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.869140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.869172 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6q7d\" (UniqueName: \"kubernetes.io/projected/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-kube-api-access-j6q7d\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.869194 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.876067 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.877196 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-logs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.888003 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.909908 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85d74757d5-v95tz"] Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.921363 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.921419 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.921750 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.922450 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: W1013 06:47:13.923106 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77004520_24e0_4076_8155_b4a8b6b3e1a2.slice/crio-fc4185496a8f5aceebc4214786114cd6386579632e77074f673c4c0161c217d8 WatchSource:0}: Error finding container fc4185496a8f5aceebc4214786114cd6386579632e77074f673c4c0161c217d8: Status 404 returned error can't find the container with id fc4185496a8f5aceebc4214786114cd6386579632e77074f673c4c0161c217d8 Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.925999 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.945550 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6q7d\" (UniqueName: \"kubernetes.io/projected/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-kube-api-access-j6q7d\") pod \"cinder-api-0\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " pod="openstack/cinder-api-0" Oct 13 06:47:13 crc kubenswrapper[4833]: I1013 06:47:13.954430 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 13 06:47:14 crc kubenswrapper[4833]: I1013 06:47:14.253257 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b78565d7c-d78jk" event={"ID":"65e5cee6-ee1c-4612-89b8-c2cfe968438b","Type":"ContainerStarted","Data":"d43c06194342280710813b12ad00477467b337fe1567ed350bad5cbf383d8289"} Oct 13 06:47:14 crc kubenswrapper[4833]: I1013 06:47:14.253839 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b78565d7c-d78jk" event={"ID":"65e5cee6-ee1c-4612-89b8-c2cfe968438b","Type":"ContainerStarted","Data":"693f1a344ba18ce292d370fac9613ada4bf6424ec01d376fafe1fa5f5d79c8b2"} Oct 13 06:47:14 crc kubenswrapper[4833]: I1013 06:47:14.254597 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:47:14 crc kubenswrapper[4833]: I1013 06:47:14.254726 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:14 crc kubenswrapper[4833]: I1013 06:47:14.261682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85d74757d5-v95tz" event={"ID":"77004520-24e0-4076-8155-b4a8b6b3e1a2","Type":"ContainerStarted","Data":"fc4185496a8f5aceebc4214786114cd6386579632e77074f673c4c0161c217d8"} Oct 13 06:47:14 crc kubenswrapper[4833]: W1013 06:47:14.269707 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaeaef09_d532_4399_b9bb_c9e59fbf1a62.slice/crio-9379fbfd7950ced81fb002d6875d701e087c8f38f737429e34e06513f1b0a092 WatchSource:0}: Error finding container 9379fbfd7950ced81fb002d6875d701e087c8f38f737429e34e06513f1b0a092: Status 404 returned error can't find the container with id 9379fbfd7950ced81fb002d6875d701e087c8f38f737429e34e06513f1b0a092 Oct 13 06:47:14 crc kubenswrapper[4833]: I1013 06:47:14.283677 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b78565d7c-d78jk" podStartSLOduration=2.283649231 podStartE2EDuration="2.283649231s" podCreationTimestamp="2025-10-13 06:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:14.283442145 +0000 UTC m=+1124.383865061" watchObservedRunningTime="2025-10-13 06:47:14.283649231 +0000 UTC m=+1124.384072147" Oct 13 06:47:14 crc kubenswrapper[4833]: I1013 06:47:14.647390 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5cb165-db41-4025-8aca-e3fab02fcaee" path="/var/lib/kubelet/pods/6f5cb165-db41-4025-8aca-e3fab02fcaee/volumes" Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.075131 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 
06:47:15.276428 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85d74757d5-v95tz" event={"ID":"77004520-24e0-4076-8155-b4a8b6b3e1a2","Type":"ContainerStarted","Data":"ffc9dd1e713324f809d315d085d14b604a856f57e2677cc7a6979ac4e967d33f"} Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.276850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85d74757d5-v95tz" event={"ID":"77004520-24e0-4076-8155-b4a8b6b3e1a2","Type":"ContainerStarted","Data":"bd81c5b70960bb5c69c83b108d56e6ca81fcf9ac0a02765d463866b0cd5ed1af"} Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.278240 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.278273 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.281745 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaeaef09-d532-4399-b9bb-c9e59fbf1a62","Type":"ContainerStarted","Data":"ae447bf76892b7eb14df95538c7ae37b62536247b753f59a219c7f2aae34cdf7"} Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.281792 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaeaef09-d532-4399-b9bb-c9e59fbf1a62","Type":"ContainerStarted","Data":"9379fbfd7950ced81fb002d6875d701e087c8f38f737429e34e06513f1b0a092"} Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.419922 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.445652 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85d74757d5-v95tz" podStartSLOduration=3.445637021 podStartE2EDuration="3.445637021s" podCreationTimestamp="2025-10-13 06:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:15.30601935 +0000 UTC m=+1125.406442276" watchObservedRunningTime="2025-10-13 06:47:15.445637021 +0000 UTC m=+1125.546059937" Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.497907 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-699bdfffd4-dzv2d"] Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.501013 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-699bdfffd4-dzv2d" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api-log" containerID="cri-o://ea826397e4620c9aa6802a28450a26408a048afd091c87c85d9c0e538a386d5f" gracePeriod=30 Oct 13 06:47:15 crc kubenswrapper[4833]: I1013 06:47:15.501592 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-699bdfffd4-dzv2d" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api" containerID="cri-o://bf36514ddef85dd3ee99972e11cc3f1f90b0bfd44f94b8cee098277e01bd4b36" gracePeriod=30 Oct 13 06:47:16 crc kubenswrapper[4833]: I1013 06:47:16.297391 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaeaef09-d532-4399-b9bb-c9e59fbf1a62","Type":"ContainerStarted","Data":"b566af3cdcf6966c91d8eb92814d438d4b7a59c8593fa14b053ca258afc3130a"} Oct 13 06:47:16 crc kubenswrapper[4833]: I1013 06:47:16.297776 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 13 06:47:16 crc kubenswrapper[4833]: I1013 06:47:16.306821 4833 generic.go:334] "Generic (PLEG): container finished" podID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerID="ea826397e4620c9aa6802a28450a26408a048afd091c87c85d9c0e538a386d5f" exitCode=143 Oct 13 06:47:16 crc kubenswrapper[4833]: I1013 06:47:16.308117 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699bdfffd4-dzv2d" event={"ID":"82e87d62-aa7e-466c-a479-8b0c6e3deb64","Type":"ContainerDied","Data":"ea826397e4620c9aa6802a28450a26408a048afd091c87c85d9c0e538a386d5f"} Oct 13 06:47:16 crc kubenswrapper[4833]: I1013 06:47:16.326456 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.326429903 podStartE2EDuration="3.326429903s" podCreationTimestamp="2025-10-13 06:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:16.319169652 +0000 UTC m=+1126.419592568" watchObservedRunningTime="2025-10-13 06:47:16.326429903 +0000 UTC m=+1126.426852819" Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.280868 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.281554 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="ceilometer-central-agent" containerID="cri-o://c48cf04e811c4344f099ce9cf34c4e3083c0fdad051024faf84fdd53e28a1725" gracePeriod=30 Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.281630 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="proxy-httpd" containerID="cri-o://5acb9e02c1b90b2983691f74932d3024b23ddf2e04312a1fca2aa178c00fe6fd" gracePeriod=30 Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.281694 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="ceilometer-notification-agent" containerID="cri-o://4f1a44196150c66ed38bc160ef64b3a59417e5f14fa3a722c97f2886256ddebe" gracePeriod=30 Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.281632 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="sg-core" containerID="cri-o://42c273be61601c62ad5e6c7cfbad29f367aa6e1f643df15bf038851fd84eff5c" gracePeriod=30 Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.303440 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.774338 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.862285 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-kndjd"] Oct 13 06:47:17 crc kubenswrapper[4833]: I1013 06:47:17.863725 4833 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" podUID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerName="dnsmasq-dns" containerID="cri-o://669a4b07d2f582062a00e9808d522c9427ff0c3677eab5936a594d9ecbbd1677" gracePeriod=10 Oct 13 06:47:18 crc kubenswrapper[4833]: E1013 06:47:18.076217 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87554ede_75d3_4ee6_a16a_71c768cb09ef.slice/crio-669a4b07d2f582062a00e9808d522c9427ff0c3677eab5936a594d9ecbbd1677.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87554ede_75d3_4ee6_a16a_71c768cb09ef.slice/crio-conmon-669a4b07d2f582062a00e9808d522c9427ff0c3677eab5936a594d9ecbbd1677.scope\": RecentStats: unable to find data in memory cache]" Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.243342 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.288865 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.332026 4833 generic.go:334] "Generic (PLEG): container finished" podID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerID="5acb9e02c1b90b2983691f74932d3024b23ddf2e04312a1fca2aa178c00fe6fd" exitCode=0 Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.332063 4833 generic.go:334] "Generic (PLEG): container finished" podID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerID="42c273be61601c62ad5e6c7cfbad29f367aa6e1f643df15bf038851fd84eff5c" exitCode=2 Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.332075 4833 generic.go:334] "Generic (PLEG): container finished" podID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerID="c48cf04e811c4344f099ce9cf34c4e3083c0fdad051024faf84fdd53e28a1725" exitCode=0 Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.332097 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerDied","Data":"5acb9e02c1b90b2983691f74932d3024b23ddf2e04312a1fca2aa178c00fe6fd"} Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.332134 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerDied","Data":"42c273be61601c62ad5e6c7cfbad29f367aa6e1f643df15bf038851fd84eff5c"} Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.332151 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerDied","Data":"c48cf04e811c4344f099ce9cf34c4e3083c0fdad051024faf84fdd53e28a1725"} Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.337357 4833 generic.go:334] "Generic (PLEG): container finished" podID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerID="669a4b07d2f582062a00e9808d522c9427ff0c3677eab5936a594d9ecbbd1677" exitCode=0 Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.337439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" event={"ID":"87554ede-75d3-4ee6-a16a-71c768cb09ef","Type":"ContainerDied","Data":"669a4b07d2f582062a00e9808d522c9427ff0c3677eab5936a594d9ecbbd1677"} Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.337598 4833 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/cinder-scheduler-0" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerName="cinder-scheduler" containerID="cri-o://426841c889091e481b9bcd8b1d16d885174020895778937bdf2d01a96e7cb4bf" gracePeriod=30 Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.338148 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerName="probe" containerID="cri-o://fd86922d1434f8eb76dfe2b32cbe10ca21776d31a57c402f5d666198f149ff3c" gracePeriod=30 Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.670939 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699bdfffd4-dzv2d" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:45126->10.217.0.158:9311: read: connection reset by peer" Oct 13 06:47:18 crc kubenswrapper[4833]: I1013 06:47:18.671080 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699bdfffd4-dzv2d" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:45142->10.217.0.158:9311: read: connection reset by peer" Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.349435 4833 generic.go:334] "Generic (PLEG): container finished" podID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerID="bf36514ddef85dd3ee99972e11cc3f1f90b0bfd44f94b8cee098277e01bd4b36" exitCode=0 Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.349531 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699bdfffd4-dzv2d" event={"ID":"82e87d62-aa7e-466c-a479-8b0c6e3deb64","Type":"ContainerDied","Data":"bf36514ddef85dd3ee99972e11cc3f1f90b0bfd44f94b8cee098277e01bd4b36"} Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.352246 4833 generic.go:334] "Generic (PLEG): container finished" podID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerID="fd86922d1434f8eb76dfe2b32cbe10ca21776d31a57c402f5d666198f149ff3c" exitCode=0 Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.352279 4833 generic.go:334] "Generic (PLEG): container finished" podID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerID="426841c889091e481b9bcd8b1d16d885174020895778937bdf2d01a96e7cb4bf" exitCode=0 Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.352298 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0387db0-cc24-4278-bb4b-f3f5784440ef","Type":"ContainerDied","Data":"fd86922d1434f8eb76dfe2b32cbe10ca21776d31a57c402f5d666198f149ff3c"} Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.352335 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0387db0-cc24-4278-bb4b-f3f5784440ef","Type":"ContainerDied","Data":"426841c889091e481b9bcd8b1d16d885174020895778937bdf2d01a96e7cb4bf"} Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.355445 4833 generic.go:334] "Generic (PLEG): container finished" podID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerID="4f1a44196150c66ed38bc160ef64b3a59417e5f14fa3a722c97f2886256ddebe" exitCode=0 Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.355482 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerDied","Data":"4f1a44196150c66ed38bc160ef64b3a59417e5f14fa3a722c97f2886256ddebe"} Oct 13 06:47:19 crc kubenswrapper[4833]: I1013 06:47:19.920454 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" podUID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.014293 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-g7h2h"] Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.017787 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g7h2h" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.024751 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g7h2h"] Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.068819 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxwn\" (UniqueName: \"kubernetes.io/projected/6924055c-4a48-4fdc-ba3f-fb5c48bd110e-kube-api-access-hnxwn\") pod \"nova-api-db-create-g7h2h\" (UID: \"6924055c-4a48-4fdc-ba3f-fb5c48bd110e\") " pod="openstack/nova-api-db-create-g7h2h" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.133983 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9f6qc"] Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.135111 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9f6qc" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.155356 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9f6qc"] Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.175735 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69cb\" (UniqueName: \"kubernetes.io/projected/d46a93cd-a9e7-492d-99f2-931ea5e957c2-kube-api-access-j69cb\") pod \"nova-cell0-db-create-9f6qc\" (UID: \"d46a93cd-a9e7-492d-99f2-931ea5e957c2\") " pod="openstack/nova-cell0-db-create-9f6qc" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.175790 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnxwn\" (UniqueName: \"kubernetes.io/projected/6924055c-4a48-4fdc-ba3f-fb5c48bd110e-kube-api-access-hnxwn\") pod \"nova-api-db-create-g7h2h\" (UID: \"6924055c-4a48-4fdc-ba3f-fb5c48bd110e\") " pod="openstack/nova-api-db-create-g7h2h" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.216850 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnxwn\" (UniqueName: \"kubernetes.io/projected/6924055c-4a48-4fdc-ba3f-fb5c48bd110e-kube-api-access-hnxwn\") pod \"nova-api-db-create-g7h2h\" (UID: \"6924055c-4a48-4fdc-ba3f-fb5c48bd110e\") " pod="openstack/nova-api-db-create-g7h2h" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.278672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69cb\" (UniqueName: \"kubernetes.io/projected/d46a93cd-a9e7-492d-99f2-931ea5e957c2-kube-api-access-j69cb\") pod \"nova-cell0-db-create-9f6qc\" (UID: \"d46a93cd-a9e7-492d-99f2-931ea5e957c2\") " pod="openstack/nova-cell0-db-create-9f6qc" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.300341 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69cb\" (UniqueName: \"kubernetes.io/projected/d46a93cd-a9e7-492d-99f2-931ea5e957c2-kube-api-access-j69cb\") pod \"nova-cell0-db-create-9f6qc\" (UID: \"d46a93cd-a9e7-492d-99f2-931ea5e957c2\") " pod="openstack/nova-cell0-db-create-9f6qc" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.333657 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g7h2h" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.414562 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-wmxsj"] Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.417429 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wmxsj" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.426029 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-wmxsj"] Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.458719 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9f6qc" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.482667 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqkm8\" (UniqueName: \"kubernetes.io/projected/fa712112-5d57-44d0-9417-a5eb9d993780-kube-api-access-nqkm8\") pod \"nova-cell1-db-create-wmxsj\" (UID: \"fa712112-5d57-44d0-9417-a5eb9d993780\") " pod="openstack/nova-cell1-db-create-wmxsj" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.585178 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqkm8\" (UniqueName: \"kubernetes.io/projected/fa712112-5d57-44d0-9417-a5eb9d993780-kube-api-access-nqkm8\") pod \"nova-cell1-db-create-wmxsj\" (UID: \"fa712112-5d57-44d0-9417-a5eb9d993780\") " pod="openstack/nova-cell1-db-create-wmxsj" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.606205 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqkm8\" (UniqueName: \"kubernetes.io/projected/fa712112-5d57-44d0-9417-a5eb9d993780-kube-api-access-nqkm8\") pod \"nova-cell1-db-create-wmxsj\" (UID: \"fa712112-5d57-44d0-9417-a5eb9d993780\") " pod="openstack/nova-cell1-db-create-wmxsj" Oct 13 06:47:21 crc kubenswrapper[4833]: I1013 06:47:21.740772 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-wmxsj" Oct 13 06:47:22 crc kubenswrapper[4833]: I1013 06:47:22.513074 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:47:22 crc kubenswrapper[4833]: I1013 06:47:22.513320 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerName="glance-log" containerID="cri-o://37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529" gracePeriod=30 Oct 13 06:47:22 crc kubenswrapper[4833]: I1013 06:47:22.513428 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerName="glance-httpd" containerID="cri-o://f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011" gracePeriod=30 Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.207744 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.215059 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.407558 4833 generic.go:334] "Generic (PLEG): container finished" podID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerID="37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529" exitCode=143 Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.408698 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f571ae4-3483-4a8e-8f33-f445c77395c2","Type":"ContainerDied","Data":"37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529"} Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.444179 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.444496 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-log" containerID="cri-o://962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2" gracePeriod=30 Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.444666 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-httpd" containerID="cri-o://39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f" gracePeriod=30 Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.543275 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699bdfffd4-dzv2d" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": dial tcp 10.217.0.158:9311: connect: connection refused" Oct 13 06:47:23 crc kubenswrapper[4833]: I1013 06:47:23.543754 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699bdfffd4-dzv2d" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": dial tcp 10.217.0.158:9311: connect: connection refused" Oct 13 06:47:24 crc 
kubenswrapper[4833]: I1013 06:47:24.347368 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.416630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" event={"ID":"87554ede-75d3-4ee6-a16a-71c768cb09ef","Type":"ContainerDied","Data":"647deb5dbc02c360162c70ab012bd5b75679da6988456c7fa707b74b06e4bb3e"} Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.416676 4833 scope.go:117] "RemoveContainer" containerID="669a4b07d2f582062a00e9808d522c9427ff0c3677eab5936a594d9ecbbd1677" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.416784 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-kndjd" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.427964 4833 generic.go:334] "Generic (PLEG): container finished" podID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerID="962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2" exitCode=143 Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.428018 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f1a519-8ba6-44b4-9230-b93b13f25ff4","Type":"ContainerDied","Data":"962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2"} Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.449788 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-svc\") pod \"87554ede-75d3-4ee6-a16a-71c768cb09ef\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.449879 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-nb\") pod \"87554ede-75d3-4ee6-a16a-71c768cb09ef\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.450002 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-swift-storage-0\") pod \"87554ede-75d3-4ee6-a16a-71c768cb09ef\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.450049 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-config\") pod \"87554ede-75d3-4ee6-a16a-71c768cb09ef\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.450089 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ndtr\" (UniqueName: \"kubernetes.io/projected/87554ede-75d3-4ee6-a16a-71c768cb09ef-kube-api-access-6ndtr\") pod \"87554ede-75d3-4ee6-a16a-71c768cb09ef\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.450149 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-sb\") pod \"87554ede-75d3-4ee6-a16a-71c768cb09ef\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " Oct 13 06:47:24 crc 
kubenswrapper[4833]: I1013 06:47:24.468518 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87554ede-75d3-4ee6-a16a-71c768cb09ef-kube-api-access-6ndtr" (OuterVolumeSpecName: "kube-api-access-6ndtr") pod "87554ede-75d3-4ee6-a16a-71c768cb09ef" (UID: "87554ede-75d3-4ee6-a16a-71c768cb09ef"). InnerVolumeSpecName "kube-api-access-6ndtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.468853 4833 scope.go:117] "RemoveContainer" containerID="ec50a07230d9abf532354ae3d840994406bec2abfb9b82f9150e98b445ac952c" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.566310 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87554ede-75d3-4ee6-a16a-71c768cb09ef" (UID: "87554ede-75d3-4ee6-a16a-71c768cb09ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.566601 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87554ede-75d3-4ee6-a16a-71c768cb09ef" (UID: "87554ede-75d3-4ee6-a16a-71c768cb09ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.566781 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-sb\") pod \"87554ede-75d3-4ee6-a16a-71c768cb09ef\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.566903 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-svc\") pod \"87554ede-75d3-4ee6-a16a-71c768cb09ef\" (UID: \"87554ede-75d3-4ee6-a16a-71c768cb09ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.567376 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ndtr\" (UniqueName: \"kubernetes.io/projected/87554ede-75d3-4ee6-a16a-71c768cb09ef-kube-api-access-6ndtr\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:24 crc kubenswrapper[4833]: W1013 06:47:24.567449 4833 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/87554ede-75d3-4ee6-a16a-71c768cb09ef/volumes/kubernetes.io~configmap/dns-svc Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.567457 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87554ede-75d3-4ee6-a16a-71c768cb09ef" (UID: "87554ede-75d3-4ee6-a16a-71c768cb09ef"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:24 crc kubenswrapper[4833]: W1013 06:47:24.567554 4833 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/87554ede-75d3-4ee6-a16a-71c768cb09ef/volumes/kubernetes.io~configmap/ovsdbserver-sb Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.567564 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87554ede-75d3-4ee6-a16a-71c768cb09ef" (UID: "87554ede-75d3-4ee6-a16a-71c768cb09ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.573927 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87554ede-75d3-4ee6-a16a-71c768cb09ef" (UID: "87554ede-75d3-4ee6-a16a-71c768cb09ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.582088 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-config" (OuterVolumeSpecName: "config") pod "87554ede-75d3-4ee6-a16a-71c768cb09ef" (UID: "87554ede-75d3-4ee6-a16a-71c768cb09ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.583065 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "87554ede-75d3-4ee6-a16a-71c768cb09ef" (UID: "87554ede-75d3-4ee6-a16a-71c768cb09ef"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.671712 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.671749 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.671759 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.671769 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.671777 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87554ede-75d3-4ee6-a16a-71c768cb09ef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.759185 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-kndjd"] Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.768513 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.771857 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-kndjd"] Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.797774 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.809164 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-699bdfffd4-dzv2d" Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.876767 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data\") pod \"b0387db0-cc24-4278-bb4b-f3f5784440ef\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877000 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-config-data\") pod \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877041 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data-custom\") pod \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877066 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-scripts\") pod \"b0387db0-cc24-4278-bb4b-f3f5784440ef\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877108 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-combined-ca-bundle\") pod \"b0387db0-cc24-4278-bb4b-f3f5784440ef\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twhkq\" (UniqueName: \"kubernetes.io/projected/b0387db0-cc24-4278-bb4b-f3f5784440ef-kube-api-access-twhkq\") pod \"b0387db0-cc24-4278-bb4b-f3f5784440ef\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-run-httpd\") pod \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877170 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e87d62-aa7e-466c-a479-8b0c6e3deb64-logs\") pod \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877227 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-scripts\") pod \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") " Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877258 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0387db0-cc24-4278-bb4b-f3f5784440ef-etc-machine-id\") pod \"b0387db0-cc24-4278-bb4b-f3f5784440ef\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") " 
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877276 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk6cn\" (UniqueName: \"kubernetes.io/projected/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-kube-api-access-bk6cn\") pod \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") "
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877306 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-log-httpd\") pod \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") "
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877321 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-sg-core-conf-yaml\") pod \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") "
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877346 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmk4l\" (UniqueName: \"kubernetes.io/projected/82e87d62-aa7e-466c-a479-8b0c6e3deb64-kube-api-access-pmk4l\") pod \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") "
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877377 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-combined-ca-bundle\") pod \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") "
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877403 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data-custom\") pod \"b0387db0-cc24-4278-bb4b-f3f5784440ef\" (UID: \"b0387db0-cc24-4278-bb4b-f3f5784440ef\") "
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877479 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-combined-ca-bundle\") pod \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\" (UID: \"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d\") "
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.877499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data\") pod \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\" (UID: \"82e87d62-aa7e-466c-a479-8b0c6e3deb64\") "
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.879457 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" (UID: "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.883364 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0387db0-cc24-4278-bb4b-f3f5784440ef-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b0387db0-cc24-4278-bb4b-f3f5784440ef" (UID: "b0387db0-cc24-4278-bb4b-f3f5784440ef"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.883398 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e87d62-aa7e-466c-a479-8b0c6e3deb64-logs" (OuterVolumeSpecName: "logs") pod "82e87d62-aa7e-466c-a479-8b0c6e3deb64" (UID: "82e87d62-aa7e-466c-a479-8b0c6e3deb64"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.885405 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" (UID: "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.890371 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82e87d62-aa7e-466c-a479-8b0c6e3deb64" (UID: "82e87d62-aa7e-466c-a479-8b0c6e3deb64"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.898353 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e87d62-aa7e-466c-a479-8b0c6e3deb64-kube-api-access-pmk4l" (OuterVolumeSpecName: "kube-api-access-pmk4l") pod "82e87d62-aa7e-466c-a479-8b0c6e3deb64" (UID: "82e87d62-aa7e-466c-a479-8b0c6e3deb64"). InnerVolumeSpecName "kube-api-access-pmk4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.918094 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-scripts" (OuterVolumeSpecName: "scripts") pod "b0387db0-cc24-4278-bb4b-f3f5784440ef" (UID: "b0387db0-cc24-4278-bb4b-f3f5784440ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.918142 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-kube-api-access-bk6cn" (OuterVolumeSpecName: "kube-api-access-bk6cn") pod "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" (UID: "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d"). InnerVolumeSpecName "kube-api-access-bk6cn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.924288 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b0387db0-cc24-4278-bb4b-f3f5784440ef" (UID: "b0387db0-cc24-4278-bb4b-f3f5784440ef"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.924424 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-scripts" (OuterVolumeSpecName: "scripts") pod "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" (UID: "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.925668 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0387db0-cc24-4278-bb4b-f3f5784440ef-kube-api-access-twhkq" (OuterVolumeSpecName: "kube-api-access-twhkq") pod "b0387db0-cc24-4278-bb4b-f3f5784440ef" (UID: "b0387db0-cc24-4278-bb4b-f3f5784440ef"). InnerVolumeSpecName "kube-api-access-twhkq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.936468 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82e87d62-aa7e-466c-a479-8b0c6e3deb64" (UID: "82e87d62-aa7e-466c-a479-8b0c6e3deb64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.947659 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" (UID: "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980259 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980294 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980306 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twhkq\" (UniqueName: \"kubernetes.io/projected/b0387db0-cc24-4278-bb4b-f3f5784440ef-kube-api-access-twhkq\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980321 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980334 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82e87d62-aa7e-466c-a479-8b0c6e3deb64-logs\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980344 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980355 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0387db0-cc24-4278-bb4b-f3f5784440ef-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980366 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk6cn\" (UniqueName: \"kubernetes.io/projected/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-kube-api-access-bk6cn\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980376 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980388 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980398 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmk4l\" (UniqueName: \"kubernetes.io/projected/82e87d62-aa7e-466c-a479-8b0c6e3deb64-kube-api-access-pmk4l\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980411 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.980420 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.989603 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9f6qc"]
Oct 13 06:47:24 crc kubenswrapper[4833]: I1013 06:47:24.991477 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0387db0-cc24-4278-bb4b-f3f5784440ef" (UID: "b0387db0-cc24-4278-bb4b-f3f5784440ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.017594 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g7h2h"]
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.048897 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-wmxsj"]
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.055676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data" (OuterVolumeSpecName: "config-data") pod "82e87d62-aa7e-466c-a479-8b0c6e3deb64" (UID: "82e87d62-aa7e-466c-a479-8b0c6e3deb64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.082580 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.082635 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e87d62-aa7e-466c-a479-8b0c6e3deb64-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.087653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data" (OuterVolumeSpecName: "config-data") pod "b0387db0-cc24-4278-bb4b-f3f5784440ef" (UID: "b0387db0-cc24-4278-bb4b-f3f5784440ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.117465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" (UID: "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.134162 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-config-data" (OuterVolumeSpecName: "config-data") pod "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" (UID: "0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.184757 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.184793 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0387db0-cc24-4278-bb4b-f3f5784440ef-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.184806 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.441137 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wmxsj" event={"ID":"fa712112-5d57-44d0-9417-a5eb9d993780","Type":"ContainerStarted","Data":"9416ac9c968c4a5a9db5cb7e7ac7d1de0a102dc393f0cd7058fc8426e472195e"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.441506 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wmxsj" event={"ID":"fa712112-5d57-44d0-9417-a5eb9d993780","Type":"ContainerStarted","Data":"6b2f7f619c8ef26605ae64ffa0d89a7ba73d27a91fd626462ac779bd5dbb79a0"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.450406 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g7h2h" event={"ID":"6924055c-4a48-4fdc-ba3f-fb5c48bd110e","Type":"ContainerStarted","Data":"e1a96ec3f20ad6ec9e3f20cf64c5ec450408d8b81f36296a339f7790dd37c4ac"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.450443 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g7h2h" event={"ID":"6924055c-4a48-4fdc-ba3f-fb5c48bd110e","Type":"ContainerStarted","Data":"1389a4bf017df44055bed8c1f2d2e0f0c80be36ea9313b2adc0a70bd5ca4abe3"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.452432 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699bdfffd4-dzv2d" event={"ID":"82e87d62-aa7e-466c-a479-8b0c6e3deb64","Type":"ContainerDied","Data":"49284366641c19517c22851685ee9ce9dec2d675df092fa9316b8cf2e83d2c54"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.452462 4833 scope.go:117] "RemoveContainer" containerID="bf36514ddef85dd3ee99972e11cc3f1f90b0bfd44f94b8cee098277e01bd4b36"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.452705 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-699bdfffd4-dzv2d"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.464752 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-wmxsj" podStartSLOduration=4.464729751 podStartE2EDuration="4.464729751s" podCreationTimestamp="2025-10-13 06:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:25.45780759 +0000 UTC m=+1135.558230516" watchObservedRunningTime="2025-10-13 06:47:25.464729751 +0000 UTC m=+1135.565152667"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.472924 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0387db0-cc24-4278-bb4b-f3f5784440ef","Type":"ContainerDied","Data":"da38dea0e524b44150b38db4be9cce8ea879bb749fe6f3a3ced3d59e07b588bd"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.473037 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.486977 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d","Type":"ContainerDied","Data":"b8a1626c4455bbf77649d6ebd208e3c8f0f76fe9482176e8f41a72b7e3f53a35"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.487085 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.490132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9f6qc" event={"ID":"d46a93cd-a9e7-492d-99f2-931ea5e957c2","Type":"ContainerStarted","Data":"166d638371c21b70a6b659f8a7c10a54bb522993f3251baf156ad2b8b7c87960"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.490305 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9f6qc" event={"ID":"d46a93cd-a9e7-492d-99f2-931ea5e957c2","Type":"ContainerStarted","Data":"8174ede67b93c7f3a76af8ead0b07376d0467544fc4700415a3883294a3cc983"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.492927 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5783401d-3007-4df3-a902-1869d62c4acc","Type":"ContainerStarted","Data":"c8ad3d74107bc327da884a44b88aa948e92843c3f297250dc65f8ce46d13f20f"}
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.521595 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.934380055 podStartE2EDuration="19.52157047s" podCreationTimestamp="2025-10-13 06:47:06 +0000 UTC" firstStartedPulling="2025-10-13 06:47:07.637457761 +0000 UTC m=+1117.737880677" lastFinishedPulling="2025-10-13 06:47:24.224648176 +0000 UTC m=+1134.325071092" observedRunningTime="2025-10-13 06:47:25.518639435 +0000 UTC m=+1135.619062351" watchObservedRunningTime="2025-10-13 06:47:25.52157047 +0000 UTC m=+1135.621993396"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.617558 4833 scope.go:117] "RemoveContainer" containerID="ea826397e4620c9aa6802a28450a26408a048afd091c87c85d9c0e538a386d5f"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.624559 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.639085 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.641936 4833 scope.go:117] "RemoveContainer" containerID="fd86922d1434f8eb76dfe2b32cbe10ca21776d31a57c402f5d666198f149ff3c"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.660722 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-699bdfffd4-dzv2d"]
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.675677 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676052 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerName="dnsmasq-dns"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676068 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerName="dnsmasq-dns"
Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676077 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerName="cinder-scheduler"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676084 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerName="cinder-scheduler"
Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676094 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676100 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api"
Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676118 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="sg-core"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676123 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="sg-core"
Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676141 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api-log"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676147 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api-log"
Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676153 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="ceilometer-notification-agent"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676159 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="ceilometer-notification-agent"
Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676171 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="proxy-httpd"
Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676177 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="proxy-httpd"
Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676185 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="ceilometer-central-agent"
Oct
13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676190 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="ceilometer-central-agent" Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676204 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerName="init" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676211 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerName="init" Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.676223 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerName="probe" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676228 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerName="probe" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676381 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="87554ede-75d3-4ee6-a16a-71c768cb09ef" containerName="dnsmasq-dns" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676392 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676398 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerName="probe" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676407 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="ceilometer-notification-agent" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676415 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" containerName="barbican-api-log" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676422 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="sg-core" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676433 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="proxy-httpd" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676448 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" containerName="ceilometer-central-agent" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.676459 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" containerName="cinder-scheduler" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.677352 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.681198 4833 scope.go:117] "RemoveContainer" containerID="426841c889091e481b9bcd8b1d16d885174020895778937bdf2d01a96e7cb4bf" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.687248 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-699bdfffd4-dzv2d"] Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.690908 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.703591 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.718848 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.736514 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.761379 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.765396 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.767676 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.767896 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.768844 4833 scope.go:117] "RemoveContainer" containerID="5acb9e02c1b90b2983691f74932d3024b23ddf2e04312a1fca2aa178c00fe6fd" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.770332 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.798527 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.798618 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.798720 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-scripts\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.798746 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-854w8\" (UniqueName: \"kubernetes.io/projected/2b3604db-dabe-4d61-918d-b41a85fbcbf5-kube-api-access-854w8\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " 
pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.798795 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b3604db-dabe-4d61-918d-b41a85fbcbf5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.798858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.813604 4833 scope.go:117] "RemoveContainer" containerID="42c273be61601c62ad5e6c7cfbad29f367aa6e1f643df15bf038851fd84eff5c" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.853811 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:25 crc kubenswrapper[4833]: E1013 06:47:25.854426 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-s89l9 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[combined-ca-bundle config-data kube-api-access-s89l9 log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="openstack/ceilometer-0" podUID="8df3ad39-10bc-4055-accb-165a6c885437" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.874432 4833 scope.go:117] "RemoveContainer" containerID="4f1a44196150c66ed38bc160ef64b3a59417e5f14fa3a722c97f2886256ddebe" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900086 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-scripts\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-854w8\" (UniqueName: \"kubernetes.io/projected/2b3604db-dabe-4d61-918d-b41a85fbcbf5-kube-api-access-854w8\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900233 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89l9\" (UniqueName: \"kubernetes.io/projected/8df3ad39-10bc-4055-accb-165a6c885437-kube-api-access-s89l9\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900262 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b3604db-dabe-4d61-918d-b41a85fbcbf5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900325 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-config-data\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900368 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-log-httpd\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900413 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900462 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900491 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900757 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900812 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-run-httpd\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.900913 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-scripts\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.902132 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b3604db-dabe-4d61-918d-b41a85fbcbf5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.908051 
4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-scripts\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.908059 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.909083 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.909366 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.909882 4833 scope.go:117] "RemoveContainer" containerID="c48cf04e811c4344f099ce9cf34c4e3083c0fdad051024faf84fdd53e28a1725" Oct 13 06:47:25 crc kubenswrapper[4833]: I1013 06:47:25.917470 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-854w8\" (UniqueName: \"kubernetes.io/projected/2b3604db-dabe-4d61-918d-b41a85fbcbf5-kube-api-access-854w8\") pod \"cinder-scheduler-0\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " pod="openstack/cinder-scheduler-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.002739 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-scripts\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.002868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s89l9\" (UniqueName: \"kubernetes.io/projected/8df3ad39-10bc-4055-accb-165a6c885437-kube-api-access-s89l9\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.002905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.002943 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-config-data\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.002978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-log-httpd\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.003036 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.003083 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-run-httpd\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.004704 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-log-httpd\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.004712 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-run-httpd\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.009769 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.010062 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.010517 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-config-data\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.013055 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-scripts\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.024851 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89l9\" (UniqueName: \"kubernetes.io/projected/8df3ad39-10bc-4055-accb-165a6c885437-kube-api-access-s89l9\") pod \"ceilometer-0\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.117933 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.306419 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.411995 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-combined-ca-bundle\") pod \"5f571ae4-3483-4a8e-8f33-f445c77395c2\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.412042 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-public-tls-certs\") pod \"5f571ae4-3483-4a8e-8f33-f445c77395c2\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.412108 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-httpd-run\") pod \"5f571ae4-3483-4a8e-8f33-f445c77395c2\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.412129 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-scripts\") pod \"5f571ae4-3483-4a8e-8f33-f445c77395c2\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.412231 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-config-data\") pod \"5f571ae4-3483-4a8e-8f33-f445c77395c2\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.412270 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22q57\" (UniqueName: \"kubernetes.io/projected/5f571ae4-3483-4a8e-8f33-f445c77395c2-kube-api-access-22q57\") pod \"5f571ae4-3483-4a8e-8f33-f445c77395c2\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.412354 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5f571ae4-3483-4a8e-8f33-f445c77395c2\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.412377 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-logs\") pod \"5f571ae4-3483-4a8e-8f33-f445c77395c2\" (UID: \"5f571ae4-3483-4a8e-8f33-f445c77395c2\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.413272 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-logs" (OuterVolumeSpecName: "logs") pod "5f571ae4-3483-4a8e-8f33-f445c77395c2" (UID: "5f571ae4-3483-4a8e-8f33-f445c77395c2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.414300 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5f571ae4-3483-4a8e-8f33-f445c77395c2" (UID: "5f571ae4-3483-4a8e-8f33-f445c77395c2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.418338 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f571ae4-3483-4a8e-8f33-f445c77395c2-kube-api-access-22q57" (OuterVolumeSpecName: "kube-api-access-22q57") pod "5f571ae4-3483-4a8e-8f33-f445c77395c2" (UID: "5f571ae4-3483-4a8e-8f33-f445c77395c2"). InnerVolumeSpecName "kube-api-access-22q57". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.421431 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "5f571ae4-3483-4a8e-8f33-f445c77395c2" (UID: "5f571ae4-3483-4a8e-8f33-f445c77395c2"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.436791 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-scripts" (OuterVolumeSpecName: "scripts") pod "5f571ae4-3483-4a8e-8f33-f445c77395c2" (UID: "5f571ae4-3483-4a8e-8f33-f445c77395c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.466681 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f571ae4-3483-4a8e-8f33-f445c77395c2" (UID: "5f571ae4-3483-4a8e-8f33-f445c77395c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.488416 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-config-data" (OuterVolumeSpecName: "config-data") pod "5f571ae4-3483-4a8e-8f33-f445c77395c2" (UID: "5f571ae4-3483-4a8e-8f33-f445c77395c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.491977 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5f571ae4-3483-4a8e-8f33-f445c77395c2" (UID: "5f571ae4-3483-4a8e-8f33-f445c77395c2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.504784 4833 generic.go:334] "Generic (PLEG): container finished" podID="6924055c-4a48-4fdc-ba3f-fb5c48bd110e" containerID="e1a96ec3f20ad6ec9e3f20cf64c5ec450408d8b81f36296a339f7790dd37c4ac" exitCode=0 Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.504871 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g7h2h" event={"ID":"6924055c-4a48-4fdc-ba3f-fb5c48bd110e","Type":"ContainerDied","Data":"e1a96ec3f20ad6ec9e3f20cf64c5ec450408d8b81f36296a339f7790dd37c4ac"} Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.509656 4833 generic.go:334] "Generic (PLEG): container finished" podID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerID="f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011" exitCode=0 Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.509735 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.509748 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f571ae4-3483-4a8e-8f33-f445c77395c2","Type":"ContainerDied","Data":"f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011"} Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.510098 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f571ae4-3483-4a8e-8f33-f445c77395c2","Type":"ContainerDied","Data":"e72bf7c77dc7789deed88c0d63b45c7b3c2274e6425a22d2b374b439f501f871"} Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.510140 4833 scope.go:117] "RemoveContainer" containerID="f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.514288 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.514359 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.514372 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.514386 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22q57\" (UniqueName: \"kubernetes.io/projected/5f571ae4-3483-4a8e-8f33-f445c77395c2-kube-api-access-22q57\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.514432 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.514444 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f571ae4-3483-4a8e-8f33-f445c77395c2-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.514454 4833 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.514465 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f571ae4-3483-4a8e-8f33-f445c77395c2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.526830 4833 generic.go:334] "Generic (PLEG): container finished" podID="d46a93cd-a9e7-492d-99f2-931ea5e957c2" containerID="166d638371c21b70a6b659f8a7c10a54bb522993f3251baf156ad2b8b7c87960" exitCode=0 Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.526895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9f6qc" event={"ID":"d46a93cd-a9e7-492d-99f2-931ea5e957c2","Type":"ContainerDied","Data":"166d638371c21b70a6b659f8a7c10a54bb522993f3251baf156ad2b8b7c87960"} Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.537606 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa712112-5d57-44d0-9417-a5eb9d993780" containerID="9416ac9c968c4a5a9db5cb7e7ac7d1de0a102dc393f0cd7058fc8426e472195e" exitCode=0 Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.537700 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.538293 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wmxsj" event={"ID":"fa712112-5d57-44d0-9417-a5eb9d993780","Type":"ContainerDied","Data":"9416ac9c968c4a5a9db5cb7e7ac7d1de0a102dc393f0cd7058fc8426e472195e"} Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.552055 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.557833 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.560634 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.576751 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.591471 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:47:26 crc kubenswrapper[4833]: E1013 06:47:26.591912 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerName="glance-log" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.591924 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerName="glance-log" Oct 13 06:47:26 crc kubenswrapper[4833]: E1013 06:47:26.591937 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerName="glance-httpd" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.591942 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerName="glance-httpd" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.592144 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerName="glance-httpd" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.592165 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" containerName="glance-log" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.594430 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.603289 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.603929 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.616767 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-sg-core-conf-yaml\") pod \"8df3ad39-10bc-4055-accb-165a6c885437\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.616840 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-config-data\") pod \"8df3ad39-10bc-4055-accb-165a6c885437\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.616860 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-scripts\") pod \"8df3ad39-10bc-4055-accb-165a6c885437\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.616955 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-combined-ca-bundle\") pod \"8df3ad39-10bc-4055-accb-165a6c885437\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 
06:47:26.617015 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s89l9\" (UniqueName: \"kubernetes.io/projected/8df3ad39-10bc-4055-accb-165a6c885437-kube-api-access-s89l9\") pod \"8df3ad39-10bc-4055-accb-165a6c885437\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.617082 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-log-httpd\") pod \"8df3ad39-10bc-4055-accb-165a6c885437\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.617120 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-run-httpd\") pod \"8df3ad39-10bc-4055-accb-165a6c885437\" (UID: \"8df3ad39-10bc-4055-accb-165a6c885437\") " Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.617571 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.618512 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8df3ad39-10bc-4055-accb-165a6c885437" (UID: "8df3ad39-10bc-4055-accb-165a6c885437"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.620398 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8df3ad39-10bc-4055-accb-165a6c885437" (UID: "8df3ad39-10bc-4055-accb-165a6c885437"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.623830 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-scripts" (OuterVolumeSpecName: "scripts") pod "8df3ad39-10bc-4055-accb-165a6c885437" (UID: "8df3ad39-10bc-4055-accb-165a6c885437"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.623936 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8df3ad39-10bc-4055-accb-165a6c885437" (UID: "8df3ad39-10bc-4055-accb-165a6c885437"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.631460 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-config-data" (OuterVolumeSpecName: "config-data") pod "8df3ad39-10bc-4055-accb-165a6c885437" (UID: "8df3ad39-10bc-4055-accb-165a6c885437"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.631725 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8df3ad39-10bc-4055-accb-165a6c885437" (UID: "8df3ad39-10bc-4055-accb-165a6c885437"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.631791 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df3ad39-10bc-4055-accb-165a6c885437-kube-api-access-s89l9" (OuterVolumeSpecName: "kube-api-access-s89l9") pod "8df3ad39-10bc-4055-accb-165a6c885437" (UID: "8df3ad39-10bc-4055-accb-165a6c885437"). InnerVolumeSpecName "kube-api-access-s89l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.633107 4833 scope.go:117] "RemoveContainer" containerID="37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.642774 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.148:9292/healthcheck\": read tcp 10.217.0.2:48026->10.217.0.148:9292: read: connection reset by peer" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.642811 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.148:9292/healthcheck\": read tcp 10.217.0.2:48028->10.217.0.148:9292: read: connection reset by peer" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.653992 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d" path="/var/lib/kubelet/pods/0f8bb26b-2a01-4e9a-9216-3a016a8a9a2d/volumes" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.655062 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f571ae4-3483-4a8e-8f33-f445c77395c2" path="/var/lib/kubelet/pods/5f571ae4-3483-4a8e-8f33-f445c77395c2/volumes" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.656767 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e87d62-aa7e-466c-a479-8b0c6e3deb64" path="/var/lib/kubelet/pods/82e87d62-aa7e-466c-a479-8b0c6e3deb64/volumes" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.657952 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87554ede-75d3-4ee6-a16a-71c768cb09ef" path="/var/lib/kubelet/pods/87554ede-75d3-4ee6-a16a-71c768cb09ef/volumes" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.658525 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0387db0-cc24-4278-bb4b-f3f5784440ef" path="/var/lib/kubelet/pods/b0387db0-cc24-4278-bb4b-f3f5784440ef/volumes" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.679637 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.716443 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 
06:47:26.719241 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719310 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-config-data\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719350 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zdkc\" (UniqueName: \"kubernetes.io/projected/baa4dafc-7be7-4f97-ba72-359c27e3151c-kube-api-access-4zdkc\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719449 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719620 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-logs\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719671 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719704 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-scripts\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719951 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719970 4833 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719983 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.719995 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s89l9\" (UniqueName: \"kubernetes.io/projected/8df3ad39-10bc-4055-accb-165a6c885437-kube-api-access-s89l9\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.720006 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.720017 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8df3ad39-10bc-4055-accb-165a6c885437-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.720028 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8df3ad39-10bc-4055-accb-165a6c885437-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.721823 4833 scope.go:117] "RemoveContainer" containerID="f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011" Oct 13 06:47:26 crc kubenswrapper[4833]: E1013 06:47:26.724925 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011\": container with ID starting with f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011 not found: ID does not exist" containerID="f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.724958 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011"} err="failed to get container status \"f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011\": rpc error: code = NotFound desc = could not find container \"f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011\": container with ID starting with f840acbd624d040e57c1e9a49cb3766d68bf39e90eead2700c4dd3deb340c011 not found: ID does not exist" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.724978 4833 scope.go:117] "RemoveContainer" containerID="37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529" Oct 13 06:47:26 crc kubenswrapper[4833]: E1013 06:47:26.725273 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529\": container with ID starting with 37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529 not found: ID does not exist" containerID="37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.725322 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529"} err="failed to get container status \"37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529\": rpc error: code = NotFound desc = could not find container \"37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529\": container with ID starting with 37dd09c6646170fa6f3886c244589036ff9754a0e7a5801c48e6491e8dcac529 not found: ID does not exist" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.821931 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.822032 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-logs\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.822079 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.822112 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-scripts\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.822141 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.822195 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.822306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-config-data\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.822923 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zdkc\" (UniqueName: \"kubernetes.io/projected/baa4dafc-7be7-4f97-ba72-359c27e3151c-kube-api-access-4zdkc\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc 
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.825257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-logs\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.825499 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.826113 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.831232 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.831385 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.838334 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-config-data\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.838473 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.843562 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zdkc\" (UniqueName: \"kubernetes.io/projected/baa4dafc-7be7-4f97-ba72-359c27e3151c-kube-api-access-4zdkc\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.853358 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-scripts\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " pod="openstack/glance-default-external-api-0"
Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.872724 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " 
pod="openstack/glance-default-external-api-0" Oct 13 06:47:26 crc kubenswrapper[4833]: I1013 06:47:26.950770 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.280698 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g7h2h" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.289940 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9f6qc" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.341341 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j69cb\" (UniqueName: \"kubernetes.io/projected/d46a93cd-a9e7-492d-99f2-931ea5e957c2-kube-api-access-j69cb\") pod \"d46a93cd-a9e7-492d-99f2-931ea5e957c2\" (UID: \"d46a93cd-a9e7-492d-99f2-931ea5e957c2\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.341745 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnxwn\" (UniqueName: \"kubernetes.io/projected/6924055c-4a48-4fdc-ba3f-fb5c48bd110e-kube-api-access-hnxwn\") pod \"6924055c-4a48-4fdc-ba3f-fb5c48bd110e\" (UID: \"6924055c-4a48-4fdc-ba3f-fb5c48bd110e\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.348264 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6924055c-4a48-4fdc-ba3f-fb5c48bd110e-kube-api-access-hnxwn" (OuterVolumeSpecName: "kube-api-access-hnxwn") pod "6924055c-4a48-4fdc-ba3f-fb5c48bd110e" (UID: "6924055c-4a48-4fdc-ba3f-fb5c48bd110e"). InnerVolumeSpecName "kube-api-access-hnxwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.357825 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d46a93cd-a9e7-492d-99f2-931ea5e957c2-kube-api-access-j69cb" (OuterVolumeSpecName: "kube-api-access-j69cb") pod "d46a93cd-a9e7-492d-99f2-931ea5e957c2" (UID: "d46a93cd-a9e7-492d-99f2-931ea5e957c2"). InnerVolumeSpecName "kube-api-access-j69cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.431702 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.445121 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnxwn\" (UniqueName: \"kubernetes.io/projected/6924055c-4a48-4fdc-ba3f-fb5c48bd110e-kube-api-access-hnxwn\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.445152 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j69cb\" (UniqueName: \"kubernetes.io/projected/d46a93cd-a9e7-492d-99f2-931ea5e957c2-kube-api-access-j69cb\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.552021 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-logs\") pod \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.552067 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.552161 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-httpd-run\") pod \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.552778 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-logs" (OuterVolumeSpecName: "logs") pod "d1f1a519-8ba6-44b4-9230-b93b13f25ff4" (UID: "d1f1a519-8ba6-44b4-9230-b93b13f25ff4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.552950 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9f6qc" event={"ID":"d46a93cd-a9e7-492d-99f2-931ea5e957c2","Type":"ContainerDied","Data":"8174ede67b93c7f3a76af8ead0b07376d0467544fc4700415a3883294a3cc983"} Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.552980 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8174ede67b93c7f3a76af8ead0b07376d0467544fc4700415a3883294a3cc983" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.553034 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9f6qc" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.556158 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d1f1a519-8ba6-44b4-9230-b93b13f25ff4" (UID: "d1f1a519-8ba6-44b4-9230-b93b13f25ff4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.557275 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-config-data\") pod \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.557290 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "d1f1a519-8ba6-44b4-9230-b93b13f25ff4" (UID: "d1f1a519-8ba6-44b4-9230-b93b13f25ff4"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.557375 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-scripts\") pod \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.557393 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-internal-tls-certs\") pod \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.557427 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxf44\" (UniqueName: \"kubernetes.io/projected/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-kube-api-access-xxf44\") pod \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.557469 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-combined-ca-bundle\") pod \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\" (UID: \"d1f1a519-8ba6-44b4-9230-b93b13f25ff4\") " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.559731 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b3604db-dabe-4d61-918d-b41a85fbcbf5","Type":"ContainerStarted","Data":"f7a757e7430121896966c8b5353c760bb5793b9db0630c5d1962ce266bdfe25f"} Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.561194 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-kube-api-access-xxf44" (OuterVolumeSpecName: "kube-api-access-xxf44") pod "d1f1a519-8ba6-44b4-9230-b93b13f25ff4" (UID: "d1f1a519-8ba6-44b4-9230-b93b13f25ff4"). InnerVolumeSpecName "kube-api-access-xxf44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.562993 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.563051 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.563065 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.563078 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxf44\" (UniqueName: \"kubernetes.io/projected/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-kube-api-access-xxf44\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.567874 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g7h2h" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.568024 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g7h2h" event={"ID":"6924055c-4a48-4fdc-ba3f-fb5c48bd110e","Type":"ContainerDied","Data":"1389a4bf017df44055bed8c1f2d2e0f0c80be36ea9313b2adc0a70bd5ca4abe3"} Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.568058 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1389a4bf017df44055bed8c1f2d2e0f0c80be36ea9313b2adc0a70bd5ca4abe3" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.580704 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-scripts" (OuterVolumeSpecName: "scripts") pod "d1f1a519-8ba6-44b4-9230-b93b13f25ff4" (UID: "d1f1a519-8ba6-44b4-9230-b93b13f25ff4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.591158 4833 generic.go:334] "Generic (PLEG): container finished" podID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerID="39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f" exitCode=0 Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.591314 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.592866 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.592981 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f1a519-8ba6-44b4-9230-b93b13f25ff4","Type":"ContainerDied","Data":"39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f"} Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.593062 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f1a519-8ba6-44b4-9230-b93b13f25ff4","Type":"ContainerDied","Data":"cdc76984f6c2dbf92690452970a5b9caa55c3c1d9722432cedd7e693dd16191f"} Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.593083 4833 scope.go:117] "RemoveContainer" containerID="39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.631945 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1f1a519-8ba6-44b4-9230-b93b13f25ff4" (UID: "d1f1a519-8ba6-44b4-9230-b93b13f25ff4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.645896 4833 scope.go:117] "RemoveContainer" containerID="962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.658024 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.665331 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.665360 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.665373 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.680348 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-config-data" (OuterVolumeSpecName: "config-data") pod "d1f1a519-8ba6-44b4-9230-b93b13f25ff4" (UID: "d1f1a519-8ba6-44b4-9230-b93b13f25ff4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.697714 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d1f1a519-8ba6-44b4-9230-b93b13f25ff4" (UID: "d1f1a519-8ba6-44b4-9230-b93b13f25ff4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.704698 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.719183 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.731589 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.731673 4833 scope.go:117] "RemoveContainer" containerID="39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f" Oct 13 06:47:27 crc kubenswrapper[4833]: E1013 06:47:27.732023 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-httpd" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.732070 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-httpd" Oct 13 06:47:27 crc kubenswrapper[4833]: E1013 06:47:27.732108 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-log" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.732115 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-log" Oct 13 06:47:27 crc kubenswrapper[4833]: E1013 06:47:27.732130 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46a93cd-a9e7-492d-99f2-931ea5e957c2" containerName="mariadb-database-create" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.732143 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46a93cd-a9e7-492d-99f2-931ea5e957c2" containerName="mariadb-database-create" Oct 13 06:47:27 crc kubenswrapper[4833]: E1013 06:47:27.732153 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6924055c-4a48-4fdc-ba3f-fb5c48bd110e" containerName="mariadb-database-create" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.732159 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6924055c-4a48-4fdc-ba3f-fb5c48bd110e" containerName="mariadb-database-create" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.732322 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-log" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.732339 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" containerName="glance-httpd" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.732354 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46a93cd-a9e7-492d-99f2-931ea5e957c2" containerName="mariadb-database-create" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.732382 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6924055c-4a48-4fdc-ba3f-fb5c48bd110e" containerName="mariadb-database-create" Oct 13 06:47:27 crc kubenswrapper[4833]: E1013 06:47:27.733281 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f\": container with ID starting with 39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f not found: ID does not exist" 
containerID="39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.733319 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f"} err="failed to get container status \"39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f\": rpc error: code = NotFound desc = could not find container \"39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f\": container with ID starting with 39dc2c6e7d3dae7680675bf6375804e22b92b7c42942d41bd52d866978cb9f5f not found: ID does not exist" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.733346 4833 scope.go:117] "RemoveContainer" containerID="962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2" Oct 13 06:47:27 crc kubenswrapper[4833]: E1013 06:47:27.733986 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2\": container with ID starting with 962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2 not found: ID does not exist" containerID="962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.734027 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2"} err="failed to get container status \"962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2\": rpc error: code = NotFound desc = could not find container \"962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2\": container with ID starting with 962e9cff01281b7ce16e6fae829fd64a8ed217dbd472918821700633c7bdbef2 not found: ID does not exist" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.734005 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.738594 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.738754 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.747067 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.767043 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.767072 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f1a519-8ba6-44b4-9230-b93b13f25ff4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.774053 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.868837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-config-data\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.869093 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-log-httpd\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.869162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-run-httpd\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.869227 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.869315 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-scripts\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.869420 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv295\" (UniqueName: \"kubernetes.io/projected/981f215e-5282-41b5-80a8-436ebc928de4-kube-api-access-lv295\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.869479 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.972711 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.972846 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-config-data\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.972933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-log-httpd\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.972969 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-run-httpd\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.973015 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.973054 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-scripts\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.973088 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv295\" (UniqueName: \"kubernetes.io/projected/981f215e-5282-41b5-80a8-436ebc928de4-kube-api-access-lv295\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.973745 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-log-httpd\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.974138 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-run-httpd\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.977856 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-scripts\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.978602 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-config-data\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.979400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:27 crc kubenswrapper[4833]: I1013 06:47:27.980233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:27.996811 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv295\" (UniqueName: \"kubernetes.io/projected/981f215e-5282-41b5-80a8-436ebc928de4-kube-api-access-lv295\") pod \"ceilometer-0\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " pod="openstack/ceilometer-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.069515 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.104293 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wmxsj" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.129624 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.141574 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.151811 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:47:28 crc kubenswrapper[4833]: E1013 06:47:28.152211 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa712112-5d57-44d0-9417-a5eb9d993780" containerName="mariadb-database-create" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.152227 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa712112-5d57-44d0-9417-a5eb9d993780" containerName="mariadb-database-create" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.152394 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa712112-5d57-44d0-9417-a5eb9d993780" containerName="mariadb-database-create" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.159612 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.160395 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.161764 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.175180 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.176638 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqkm8\" (UniqueName: \"kubernetes.io/projected/fa712112-5d57-44d0-9417-a5eb9d993780-kube-api-access-nqkm8\") pod \"fa712112-5d57-44d0-9417-a5eb9d993780\" (UID: \"fa712112-5d57-44d0-9417-a5eb9d993780\") " Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.189780 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa712112-5d57-44d0-9417-a5eb9d993780-kube-api-access-nqkm8" (OuterVolumeSpecName: "kube-api-access-nqkm8") pod "fa712112-5d57-44d0-9417-a5eb9d993780" (UID: "fa712112-5d57-44d0-9417-a5eb9d993780"). InnerVolumeSpecName "kube-api-access-nqkm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.281750 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.282107 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9wmd\" (UniqueName: \"kubernetes.io/projected/d04cb142-7473-455b-8d5b-f79d879d8d58-kube-api-access-p9wmd\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.282144 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.282181 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.282220 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.282244 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-logs\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.282320 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.282471 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.282562 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqkm8\" (UniqueName: \"kubernetes.io/projected/fa712112-5d57-44d0-9417-a5eb9d993780-kube-api-access-nqkm8\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.387836 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.387959 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.388005 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9wmd\" (UniqueName: \"kubernetes.io/projected/d04cb142-7473-455b-8d5b-f79d879d8d58-kube-api-access-p9wmd\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.388024 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.388045 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.388072 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.388119 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-logs\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.388165 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.388643 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.396481 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.404978 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-logs\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.408978 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.414444 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.417428 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.418757 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9wmd\" (UniqueName: \"kubernetes.io/projected/d04cb142-7473-455b-8d5b-f79d879d8d58-kube-api-access-p9wmd\") pod \"glance-default-internal-api-0\" (UID: 
\"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.420505 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.426991 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.610827 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b3604db-dabe-4d61-918d-b41a85fbcbf5","Type":"ContainerStarted","Data":"a3e737b2f25b20ffb3b6db74d1d62d4e6066ed41e5b09d860374f17370033973"} Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.614051 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"baa4dafc-7be7-4f97-ba72-359c27e3151c","Type":"ContainerStarted","Data":"1ca508c1342303ad5c31c8a5e603015c22d16995025ac25d7f8d695d9a220d32"} Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.621215 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wmxsj" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.621250 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wmxsj" event={"ID":"fa712112-5d57-44d0-9417-a5eb9d993780","Type":"ContainerDied","Data":"6b2f7f619c8ef26605ae64ffa0d89a7ba73d27a91fd626462ac779bd5dbb79a0"} Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.621302 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b2f7f619c8ef26605ae64ffa0d89a7ba73d27a91fd626462ac779bd5dbb79a0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.659937 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df3ad39-10bc-4055-accb-165a6c885437" path="/var/lib/kubelet/pods/8df3ad39-10bc-4055-accb-165a6c885437/volumes" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.662158 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f1a519-8ba6-44b4-9230-b93b13f25ff4" path="/var/lib/kubelet/pods/d1f1a519-8ba6-44b4-9230-b93b13f25ff4/volumes" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.671259 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:28 crc kubenswrapper[4833]: I1013 06:47:28.767723 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.303662 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.645170 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b3604db-dabe-4d61-918d-b41a85fbcbf5","Type":"ContainerStarted","Data":"2c269f1c0068b7093464c1d749f2f94c414ec34d98624840bb84d4f79d7523e2"} Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.649066 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"baa4dafc-7be7-4f97-ba72-359c27e3151c","Type":"ContainerStarted","Data":"ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a"} Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.649110 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"baa4dafc-7be7-4f97-ba72-359c27e3151c","Type":"ContainerStarted","Data":"bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410"} Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.651828 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d04cb142-7473-455b-8d5b-f79d879d8d58","Type":"ContainerStarted","Data":"18280953815e89a47e836a1c880b8b210c94a68182ac3f04e94c5cb9fe4ff098"} Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.655033 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerStarted","Data":"bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c"} Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.655076 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerStarted","Data":"836522b5b5094cae814e14e50e88752a0365425d305c17b3240f88e432ce15b2"} Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.687474 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.687456164 podStartE2EDuration="3.687456164s" podCreationTimestamp="2025-10-13 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:29.685426045 +0000 UTC m=+1139.785848961" watchObservedRunningTime="2025-10-13 06:47:29.687456164 +0000 UTC m=+1139.787879080" Oct 13 06:47:29 crc kubenswrapper[4833]: I1013 06:47:29.701047 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.701024448 podStartE2EDuration="4.701024448s" podCreationTimestamp="2025-10-13 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:29.665509738 +0000 UTC m=+1139.765932674" watchObservedRunningTime="2025-10-13 06:47:29.701024448 +0000 UTC m=+1139.801447364" Oct 13 06:47:30 crc kubenswrapper[4833]: I1013 06:47:30.542157 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon 
Oct 13 06:47:30 crc kubenswrapper[4833]: I1013 06:47:30.542157 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 06:47:30 crc kubenswrapper[4833]: I1013 06:47:30.542618 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 06:47:30 crc kubenswrapper[4833]: I1013 06:47:30.670971 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d04cb142-7473-455b-8d5b-f79d879d8d58","Type":"ContainerStarted","Data":"51e7bc679df23d3e526317bc29126a1542ce20237f8a48a696b934699094819a"}
Oct 13 06:47:30 crc kubenswrapper[4833]: I1013 06:47:30.671093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d04cb142-7473-455b-8d5b-f79d879d8d58","Type":"ContainerStarted","Data":"a14dbf5251baa6dac8fcb1a3b7d4c495bc7314e806a21af252e2c6ac6c47c059"}
Oct 13 06:47:30 crc kubenswrapper[4833]: I1013 06:47:30.673981 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerStarted","Data":"acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1"}
Oct 13 06:47:30 crc kubenswrapper[4833]: I1013 06:47:30.725944 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.7259301110000003 podStartE2EDuration="2.725930111s" podCreationTimestamp="2025-10-13 06:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:47:30.715510639 +0000 UTC m=+1140.815933555" watchObservedRunningTime="2025-10-13 06:47:30.725930111 +0000 UTC m=+1140.826353027"
Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.118195 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.274529 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1bdf-account-create-rmzqc"]
Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.281870 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1bdf-account-create-rmzqc"]
Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.282032 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1bdf-account-create-rmzqc" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.292990 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.387175 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkf6m\" (UniqueName: \"kubernetes.io/projected/0c65395b-b132-4f36-98d1-f8eb739bab83-kube-api-access-hkf6m\") pod \"nova-api-1bdf-account-create-rmzqc\" (UID: \"0c65395b-b132-4f36-98d1-f8eb739bab83\") " pod="openstack/nova-api-1bdf-account-create-rmzqc" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.435056 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7d06-account-create-dk8dj"] Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.436170 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7d06-account-create-dk8dj" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.439558 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.448347 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7d06-account-create-dk8dj"] Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.488777 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcktp\" (UniqueName: \"kubernetes.io/projected/f60dfd99-68a4-49c4-8265-06cb09bca910-kube-api-access-qcktp\") pod \"nova-cell0-7d06-account-create-dk8dj\" (UID: \"f60dfd99-68a4-49c4-8265-06cb09bca910\") " pod="openstack/nova-cell0-7d06-account-create-dk8dj" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.488860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkf6m\" (UniqueName: \"kubernetes.io/projected/0c65395b-b132-4f36-98d1-f8eb739bab83-kube-api-access-hkf6m\") pod \"nova-api-1bdf-account-create-rmzqc\" (UID: \"0c65395b-b132-4f36-98d1-f8eb739bab83\") " pod="openstack/nova-api-1bdf-account-create-rmzqc" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.504427 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkf6m\" (UniqueName: \"kubernetes.io/projected/0c65395b-b132-4f36-98d1-f8eb739bab83-kube-api-access-hkf6m\") pod \"nova-api-1bdf-account-create-rmzqc\" (UID: \"0c65395b-b132-4f36-98d1-f8eb739bab83\") " pod="openstack/nova-api-1bdf-account-create-rmzqc" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.590658 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcktp\" (UniqueName: \"kubernetes.io/projected/f60dfd99-68a4-49c4-8265-06cb09bca910-kube-api-access-qcktp\") pod \"nova-cell0-7d06-account-create-dk8dj\" (UID: \"f60dfd99-68a4-49c4-8265-06cb09bca910\") " pod="openstack/nova-cell0-7d06-account-create-dk8dj" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.610308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcktp\" (UniqueName: \"kubernetes.io/projected/f60dfd99-68a4-49c4-8265-06cb09bca910-kube-api-access-qcktp\") pod \"nova-cell0-7d06-account-create-dk8dj\" (UID: \"f60dfd99-68a4-49c4-8265-06cb09bca910\") " pod="openstack/nova-cell0-7d06-account-create-dk8dj" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.621698 4833 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-1bdf-account-create-rmzqc" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.683941 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerStarted","Data":"b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5"} Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.756078 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7d06-account-create-dk8dj" Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.868675 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:31 crc kubenswrapper[4833]: I1013 06:47:31.900412 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1bdf-account-create-rmzqc"] Oct 13 06:47:31 crc kubenswrapper[4833]: W1013 06:47:31.901969 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c65395b_b132_4f36_98d1_f8eb739bab83.slice/crio-b4a450bd72b1a9c138ffa01ecd46f55de482d41380a756e43cb4296a52f11979 WatchSource:0}: Error finding container b4a450bd72b1a9c138ffa01ecd46f55de482d41380a756e43cb4296a52f11979: Status 404 returned error can't find the container with id b4a450bd72b1a9c138ffa01ecd46f55de482d41380a756e43cb4296a52f11979 Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.287700 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7d06-account-create-dk8dj"] Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.693107 4833 generic.go:334] "Generic (PLEG): container finished" podID="0c65395b-b132-4f36-98d1-f8eb739bab83" containerID="0e7db4ccbcb65bf77590c076e5c09f23a44034b0adfe352bcf2d8160439db4ff" exitCode=0 Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.693151 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1bdf-account-create-rmzqc" event={"ID":"0c65395b-b132-4f36-98d1-f8eb739bab83","Type":"ContainerDied","Data":"0e7db4ccbcb65bf77590c076e5c09f23a44034b0adfe352bcf2d8160439db4ff"} Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.693200 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1bdf-account-create-rmzqc" event={"ID":"0c65395b-b132-4f36-98d1-f8eb739bab83","Type":"ContainerStarted","Data":"b4a450bd72b1a9c138ffa01ecd46f55de482d41380a756e43cb4296a52f11979"} Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.694972 4833 generic.go:334] "Generic (PLEG): container finished" podID="f60dfd99-68a4-49c4-8265-06cb09bca910" containerID="1a0179804d6c84f013bfd95b5895e54b8a21725efd2ed74b7d0d9644d13dad41" exitCode=0 Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.695049 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7d06-account-create-dk8dj" event={"ID":"f60dfd99-68a4-49c4-8265-06cb09bca910","Type":"ContainerDied","Data":"1a0179804d6c84f013bfd95b5895e54b8a21725efd2ed74b7d0d9644d13dad41"} Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.695084 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7d06-account-create-dk8dj" event={"ID":"f60dfd99-68a4-49c4-8265-06cb09bca910","Type":"ContainerStarted","Data":"741a71e5db741845d2f662d27944522ba0b8d7cb1baa08a50a9df7a7970bf8a9"} Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.698132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerStarted","Data":"ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0"} Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.698308 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="ceilometer-central-agent" containerID="cri-o://bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c" gracePeriod=30 Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.698356 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.698381 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="proxy-httpd" containerID="cri-o://ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0" gracePeriod=30 Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.698377 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="ceilometer-notification-agent" containerID="cri-o://acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1" gracePeriod=30 Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.698603 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="sg-core" containerID="cri-o://b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5" gracePeriod=30 Oct 13 06:47:32 crc kubenswrapper[4833]: I1013 06:47:32.731239 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.196864333 podStartE2EDuration="5.731216716s" podCreationTimestamp="2025-10-13 06:47:27 +0000 UTC" firstStartedPulling="2025-10-13 06:47:28.801968607 +0000 UTC m=+1138.902391513" lastFinishedPulling="2025-10-13 06:47:32.33632096 +0000 UTC m=+1142.436743896" observedRunningTime="2025-10-13 06:47:32.723903114 +0000 UTC m=+1142.824326030" watchObservedRunningTime="2025-10-13 06:47:32.731216716 +0000 UTC m=+1142.831639632" Oct 13 06:47:33 crc kubenswrapper[4833]: I1013 06:47:33.713098 4833 generic.go:334] "Generic (PLEG): container finished" podID="981f215e-5282-41b5-80a8-436ebc928de4" containerID="ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0" exitCode=0 Oct 13 06:47:33 crc kubenswrapper[4833]: I1013 06:47:33.713451 4833 generic.go:334] "Generic (PLEG): container finished" podID="981f215e-5282-41b5-80a8-436ebc928de4" containerID="b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5" exitCode=2 Oct 13 06:47:33 crc kubenswrapper[4833]: I1013 06:47:33.713469 4833 generic.go:334] "Generic (PLEG): container finished" podID="981f215e-5282-41b5-80a8-436ebc928de4" containerID="acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1" exitCode=0 Oct 13 06:47:33 crc kubenswrapper[4833]: I1013 06:47:33.713756 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerDied","Data":"ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0"} Oct 13 06:47:33 crc kubenswrapper[4833]: I1013 06:47:33.713797 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerDied","Data":"b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5"} Oct 13 06:47:33 crc kubenswrapper[4833]: I1013 06:47:33.713820 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerDied","Data":"acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1"} Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.138047 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7d06-account-create-dk8dj" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.144700 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1bdf-account-create-rmzqc" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.245355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkf6m\" (UniqueName: \"kubernetes.io/projected/0c65395b-b132-4f36-98d1-f8eb739bab83-kube-api-access-hkf6m\") pod \"0c65395b-b132-4f36-98d1-f8eb739bab83\" (UID: \"0c65395b-b132-4f36-98d1-f8eb739bab83\") " Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.245668 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcktp\" (UniqueName: \"kubernetes.io/projected/f60dfd99-68a4-49c4-8265-06cb09bca910-kube-api-access-qcktp\") pod \"f60dfd99-68a4-49c4-8265-06cb09bca910\" (UID: \"f60dfd99-68a4-49c4-8265-06cb09bca910\") " Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.252831 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c65395b-b132-4f36-98d1-f8eb739bab83-kube-api-access-hkf6m" (OuterVolumeSpecName: "kube-api-access-hkf6m") pod "0c65395b-b132-4f36-98d1-f8eb739bab83" (UID: "0c65395b-b132-4f36-98d1-f8eb739bab83"). InnerVolumeSpecName "kube-api-access-hkf6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.252891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60dfd99-68a4-49c4-8265-06cb09bca910-kube-api-access-qcktp" (OuterVolumeSpecName: "kube-api-access-qcktp") pod "f60dfd99-68a4-49c4-8265-06cb09bca910" (UID: "f60dfd99-68a4-49c4-8265-06cb09bca910"). InnerVolumeSpecName "kube-api-access-qcktp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.347749 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcktp\" (UniqueName: \"kubernetes.io/projected/f60dfd99-68a4-49c4-8265-06cb09bca910-kube-api-access-qcktp\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.348012 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkf6m\" (UniqueName: \"kubernetes.io/projected/0c65395b-b132-4f36-98d1-f8eb739bab83-kube-api-access-hkf6m\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.731560 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7d06-account-create-dk8dj" event={"ID":"f60dfd99-68a4-49c4-8265-06cb09bca910","Type":"ContainerDied","Data":"741a71e5db741845d2f662d27944522ba0b8d7cb1baa08a50a9df7a7970bf8a9"} Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.731602 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741a71e5db741845d2f662d27944522ba0b8d7cb1baa08a50a9df7a7970bf8a9" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.731652 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7d06-account-create-dk8dj" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.737068 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1bdf-account-create-rmzqc" event={"ID":"0c65395b-b132-4f36-98d1-f8eb739bab83","Type":"ContainerDied","Data":"b4a450bd72b1a9c138ffa01ecd46f55de482d41380a756e43cb4296a52f11979"} Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.737114 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a450bd72b1a9c138ffa01ecd46f55de482d41380a756e43cb4296a52f11979" Oct 13 06:47:34 crc kubenswrapper[4833]: I1013 06:47:34.737173 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1bdf-account-create-rmzqc" Oct 13 06:47:35 crc kubenswrapper[4833]: I1013 06:47:35.851999 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.367163 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.617814 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.692118 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-combined-ca-bundle\") pod \"981f215e-5282-41b5-80a8-436ebc928de4\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.692426 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-run-httpd\") pod \"981f215e-5282-41b5-80a8-436ebc928de4\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.692573 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-config-data\") pod \"981f215e-5282-41b5-80a8-436ebc928de4\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.692727 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-log-httpd\") pod \"981f215e-5282-41b5-80a8-436ebc928de4\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.692899 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-scripts\") pod \"981f215e-5282-41b5-80a8-436ebc928de4\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.693030 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv295\" (UniqueName: \"kubernetes.io/projected/981f215e-5282-41b5-80a8-436ebc928de4-kube-api-access-lv295\") pod \"981f215e-5282-41b5-80a8-436ebc928de4\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.693175 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-sg-core-conf-yaml\") pod \"981f215e-5282-41b5-80a8-436ebc928de4\" (UID: \"981f215e-5282-41b5-80a8-436ebc928de4\") " Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.693282 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "981f215e-5282-41b5-80a8-436ebc928de4" (UID: "981f215e-5282-41b5-80a8-436ebc928de4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.693658 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "981f215e-5282-41b5-80a8-436ebc928de4" (UID: "981f215e-5282-41b5-80a8-436ebc928de4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.694058 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.694148 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/981f215e-5282-41b5-80a8-436ebc928de4-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.702884 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981f215e-5282-41b5-80a8-436ebc928de4-kube-api-access-lv295" (OuterVolumeSpecName: "kube-api-access-lv295") pod "981f215e-5282-41b5-80a8-436ebc928de4" (UID: "981f215e-5282-41b5-80a8-436ebc928de4"). InnerVolumeSpecName "kube-api-access-lv295". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.710982 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bxq6n"] Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.711507 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="proxy-httpd" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.711618 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="proxy-httpd" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.711721 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60dfd99-68a4-49c4-8265-06cb09bca910" containerName="mariadb-account-create" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.711851 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60dfd99-68a4-49c4-8265-06cb09bca910" containerName="mariadb-account-create" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.711924 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="sg-core" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.712136 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="sg-core" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.712224 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="ceilometer-central-agent" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.712276 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="ceilometer-central-agent" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.712356 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="ceilometer-notification-agent" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.712442 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="ceilometer-notification-agent" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.712519 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c65395b-b132-4f36-98d1-f8eb739bab83" containerName="mariadb-account-create" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.712603 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0c65395b-b132-4f36-98d1-f8eb739bab83" containerName="mariadb-account-create" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.712102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-scripts" (OuterVolumeSpecName: "scripts") pod "981f215e-5282-41b5-80a8-436ebc928de4" (UID: "981f215e-5282-41b5-80a8-436ebc928de4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.712958 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c65395b-b132-4f36-98d1-f8eb739bab83" containerName="mariadb-account-create" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.713028 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="sg-core" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.713082 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="ceilometer-notification-agent" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.713161 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="ceilometer-central-agent" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.713216 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="981f215e-5282-41b5-80a8-436ebc928de4" containerName="proxy-httpd" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.713276 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60dfd99-68a4-49c4-8265-06cb09bca910" containerName="mariadb-account-create" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.714589 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.716095 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dqk8g" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.720234 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.720622 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.736669 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bxq6n"] Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.754651 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "981f215e-5282-41b5-80a8-436ebc928de4" (UID: "981f215e-5282-41b5-80a8-436ebc928de4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.773195 4833 generic.go:334] "Generic (PLEG): container finished" podID="981f215e-5282-41b5-80a8-436ebc928de4" containerID="bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c" exitCode=0 Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.773250 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerDied","Data":"bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c"} Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.773283 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"981f215e-5282-41b5-80a8-436ebc928de4","Type":"ContainerDied","Data":"836522b5b5094cae814e14e50e88752a0365425d305c17b3240f88e432ce15b2"} Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.773304 4833 scope.go:117] "RemoveContainer" containerID="ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.773461 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.800955 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-config-data\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.801205 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.801354 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-scripts\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.801960 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45kc\" (UniqueName: \"kubernetes.io/projected/d3c4d37b-34eb-411f-9a0f-e266fdf37141-kube-api-access-m45kc\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.802119 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.802181 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv295\" (UniqueName: \"kubernetes.io/projected/981f215e-5282-41b5-80a8-436ebc928de4-kube-api-access-lv295\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.802253 4833 reconciler_common.go:293] 
"Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.806022 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "981f215e-5282-41b5-80a8-436ebc928de4" (UID: "981f215e-5282-41b5-80a8-436ebc928de4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.806159 4833 scope.go:117] "RemoveContainer" containerID="b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.827030 4833 scope.go:117] "RemoveContainer" containerID="acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.845099 4833 scope.go:117] "RemoveContainer" containerID="bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.864978 4833 scope.go:117] "RemoveContainer" containerID="ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.865407 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0\": container with ID starting with ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0 not found: ID does not exist" containerID="ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.865442 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0"} err="failed to get container status \"ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0\": rpc error: code = NotFound desc = could not find container \"ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0\": container with ID starting with ddab72c46c30aac5a62dc50b7724bcf76405d678015a42d0c053d291b574b9a0 not found: ID does not exist" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.865469 4833 scope.go:117] "RemoveContainer" containerID="b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.865839 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5\": container with ID starting with b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5 not found: ID does not exist" containerID="b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.865900 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5"} err="failed to get container status \"b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5\": rpc error: code = NotFound desc = could not find container \"b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5\": container with ID starting with 
b4d9cbdb7f2eb6ebaf1e2820b8d9a012698f0a5beb7992de06133f7ce11059c5 not found: ID does not exist" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.865919 4833 scope.go:117] "RemoveContainer" containerID="acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.866742 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1\": container with ID starting with acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1 not found: ID does not exist" containerID="acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.866782 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1"} err="failed to get container status \"acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1\": rpc error: code = NotFound desc = could not find container \"acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1\": container with ID starting with acebcc018ee946ec4ce2e9accf80945da52231e2d91a360f2137d0924685d4a1 not found: ID does not exist" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.866809 4833 scope.go:117] "RemoveContainer" containerID="bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c" Oct 13 06:47:36 crc kubenswrapper[4833]: E1013 06:47:36.867098 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c\": container with ID starting with bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c not found: ID does not exist" containerID="bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.867136 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c"} err="failed to get container status \"bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c\": rpc error: code = NotFound desc = could not find container \"bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c\": container with ID starting with bf342aba4e703eb760855dd70323e3dff210b798ac1b84dc771ebc49aba54d4c not found: ID does not exist" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.871718 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-config-data" (OuterVolumeSpecName: "config-data") pod "981f215e-5282-41b5-80a8-436ebc928de4" (UID: "981f215e-5282-41b5-80a8-436ebc928de4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.904330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m45kc\" (UniqueName: \"kubernetes.io/projected/d3c4d37b-34eb-411f-9a0f-e266fdf37141-kube-api-access-m45kc\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.904418 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-config-data\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.904450 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.904496 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-scripts\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.904719 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.904740 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/981f215e-5282-41b5-80a8-436ebc928de4-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.907935 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-scripts\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.908490 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-config-data\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.908943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.924823 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m45kc\" (UniqueName: 
\"kubernetes.io/projected/d3c4d37b-34eb-411f-9a0f-e266fdf37141-kube-api-access-m45kc\") pod \"nova-cell0-conductor-db-sync-bxq6n\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.952454 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.952508 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.991467 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 06:47:36 crc kubenswrapper[4833]: I1013 06:47:36.998235 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.111907 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.114896 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.124208 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.139887 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.142001 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.148149 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.148996 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.155625 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.215987 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-log-httpd\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.216027 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-config-data\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.216050 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9rk6\" (UniqueName: \"kubernetes.io/projected/fd885861-10bc-45a0-9ded-d17019cae13a-kube-api-access-m9rk6\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.216186 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.216306 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-scripts\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.216382 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-run-httpd\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.216648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.319679 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-scripts\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.319729 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-run-httpd\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.319801 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.319865 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-log-httpd\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.319882 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-config-data\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.319899 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9rk6\" (UniqueName: \"kubernetes.io/projected/fd885861-10bc-45a0-9ded-d17019cae13a-kube-api-access-m9rk6\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.319922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.320687 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-log-httpd\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.320686 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-run-httpd\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.333743 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.333897 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-scripts\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.335467 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-config-data\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.336996 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9rk6\" (UniqueName: \"kubernetes.io/projected/fd885861-10bc-45a0-9ded-d17019cae13a-kube-api-access-m9rk6\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.337860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.460755 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.589598 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bxq6n"] Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.783039 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bxq6n" event={"ID":"d3c4d37b-34eb-411f-9a0f-e266fdf37141","Type":"ContainerStarted","Data":"aabf31579ea53d52821c693fda0e304d594b6ddea590ebdbb5c5ccdceb3109c9"} Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.784713 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.784820 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 06:47:37 crc kubenswrapper[4833]: I1013 06:47:37.940455 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:37 crc kubenswrapper[4833]: W1013 06:47:37.942516 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd885861_10bc_45a0_9ded_d17019cae13a.slice/crio-efbdabc0484cb279905cb0a712fa562d654b387e4cd59d5c5f8330991402d5f2 WatchSource:0}: Error finding container efbdabc0484cb279905cb0a712fa562d654b387e4cd59d5c5f8330991402d5f2: Status 404 returned error can't find the container with id efbdabc0484cb279905cb0a712fa562d654b387e4cd59d5c5f8330991402d5f2 Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.641276 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981f215e-5282-41b5-80a8-436ebc928de4" path="/var/lib/kubelet/pods/981f215e-5282-41b5-80a8-436ebc928de4/volumes" Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.672461 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.673401 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.730892 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.731223 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.805087 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerStarted","Data":"09f05ae0f15f158640bfcddf40e73222963237ea018533209d716c3e0d5be6fb"} Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.805157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerStarted","Data":"efbdabc0484cb279905cb0a712fa562d654b387e4cd59d5c5f8330991402d5f2"} Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.805825 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:38 crc kubenswrapper[4833]: I1013 06:47:38.806208 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:39 crc 
kubenswrapper[4833]: I1013 06:47:39.814348 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerStarted","Data":"5dd9b98ccf548d37dcdac3f8dbd8013568ce016f11698d54a122341be98a8f15"} Oct 13 06:47:39 crc kubenswrapper[4833]: I1013 06:47:39.841234 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 06:47:39 crc kubenswrapper[4833]: I1013 06:47:39.841342 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 06:47:39 crc kubenswrapper[4833]: I1013 06:47:39.844902 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 06:47:40 crc kubenswrapper[4833]: I1013 06:47:40.842293 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerStarted","Data":"5e2fdf635c2cb9e5aa06ff67b3bfaee2fe1530a9590e2992c6db48616b93f506"} Oct 13 06:47:40 crc kubenswrapper[4833]: I1013 06:47:40.920677 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:40 crc kubenswrapper[4833]: I1013 06:47:40.920800 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 06:47:40 crc kubenswrapper[4833]: I1013 06:47:40.921367 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 06:47:41 crc kubenswrapper[4833]: I1013 06:47:41.564827 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3e16-account-create-mf4v7"] Oct 13 06:47:41 crc kubenswrapper[4833]: I1013 06:47:41.566728 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3e16-account-create-mf4v7" Oct 13 06:47:41 crc kubenswrapper[4833]: I1013 06:47:41.570486 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 13 06:47:41 crc kubenswrapper[4833]: I1013 06:47:41.577670 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e16-account-create-mf4v7"] Oct 13 06:47:41 crc kubenswrapper[4833]: I1013 06:47:41.700940 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24zhc\" (UniqueName: \"kubernetes.io/projected/56a9b023-885e-4bfe-8fa8-21cb68518b48-kube-api-access-24zhc\") pod \"nova-cell1-3e16-account-create-mf4v7\" (UID: \"56a9b023-885e-4bfe-8fa8-21cb68518b48\") " pod="openstack/nova-cell1-3e16-account-create-mf4v7" Oct 13 06:47:41 crc kubenswrapper[4833]: I1013 06:47:41.803280 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24zhc\" (UniqueName: \"kubernetes.io/projected/56a9b023-885e-4bfe-8fa8-21cb68518b48-kube-api-access-24zhc\") pod \"nova-cell1-3e16-account-create-mf4v7\" (UID: \"56a9b023-885e-4bfe-8fa8-21cb68518b48\") " pod="openstack/nova-cell1-3e16-account-create-mf4v7" Oct 13 06:47:41 crc kubenswrapper[4833]: I1013 06:47:41.821712 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24zhc\" (UniqueName: \"kubernetes.io/projected/56a9b023-885e-4bfe-8fa8-21cb68518b48-kube-api-access-24zhc\") pod \"nova-cell1-3e16-account-create-mf4v7\" (UID: \"56a9b023-885e-4bfe-8fa8-21cb68518b48\") " pod="openstack/nova-cell1-3e16-account-create-mf4v7" Oct 13 06:47:41 crc kubenswrapper[4833]: I1013 06:47:41.922064 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3e16-account-create-mf4v7" Oct 13 06:47:42 crc kubenswrapper[4833]: I1013 06:47:42.425759 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:47:42 crc kubenswrapper[4833]: I1013 06:47:42.490179 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cbbcdc7cb-9s5cb"] Oct 13 06:47:42 crc kubenswrapper[4833]: I1013 06:47:42.490419 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cbbcdc7cb-9s5cb" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" containerName="neutron-api" containerID="cri-o://eabb7866f4cd10a20d8f69c0751abec8dd19c02f3aa1edee8758d0c3f00c24c5" gracePeriod=30 Oct 13 06:47:42 crc kubenswrapper[4833]: I1013 06:47:42.490561 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cbbcdc7cb-9s5cb" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" containerName="neutron-httpd" containerID="cri-o://08788cf590584a916b041279ff8d75a402c9c1f78d1a781850a0bc86ece29f34" gracePeriod=30 Oct 13 06:47:42 crc kubenswrapper[4833]: I1013 06:47:42.862129 4833 generic.go:334] "Generic (PLEG): container finished" podID="90d6e69b-7990-402a-90b1-affbd44376cd" containerID="08788cf590584a916b041279ff8d75a402c9c1f78d1a781850a0bc86ece29f34" exitCode=0 Oct 13 06:47:42 crc kubenswrapper[4833]: I1013 06:47:42.862180 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbbcdc7cb-9s5cb" event={"ID":"90d6e69b-7990-402a-90b1-affbd44376cd","Type":"ContainerDied","Data":"08788cf590584a916b041279ff8d75a402c9c1f78d1a781850a0bc86ece29f34"} Oct 13 06:47:45 crc kubenswrapper[4833]: I1013 06:47:45.903681 4833 generic.go:334] "Generic (PLEG): container finished" podID="90d6e69b-7990-402a-90b1-affbd44376cd" containerID="eabb7866f4cd10a20d8f69c0751abec8dd19c02f3aa1edee8758d0c3f00c24c5" exitCode=0 Oct 13 06:47:45 crc kubenswrapper[4833]: I1013 06:47:45.904080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbbcdc7cb-9s5cb" event={"ID":"90d6e69b-7990-402a-90b1-affbd44376cd","Type":"ContainerDied","Data":"eabb7866f4cd10a20d8f69c0751abec8dd19c02f3aa1edee8758d0c3f00c24c5"} Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.073169 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.185935 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-ovndb-tls-certs\") pod \"90d6e69b-7990-402a-90b1-affbd44376cd\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.186100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-config\") pod \"90d6e69b-7990-402a-90b1-affbd44376cd\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.186187 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-combined-ca-bundle\") pod \"90d6e69b-7990-402a-90b1-affbd44376cd\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.186320 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z79kq\" (UniqueName: \"kubernetes.io/projected/90d6e69b-7990-402a-90b1-affbd44376cd-kube-api-access-z79kq\") pod \"90d6e69b-7990-402a-90b1-affbd44376cd\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.186847 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-httpd-config\") pod \"90d6e69b-7990-402a-90b1-affbd44376cd\" (UID: \"90d6e69b-7990-402a-90b1-affbd44376cd\") " Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.192321 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "90d6e69b-7990-402a-90b1-affbd44376cd" (UID: "90d6e69b-7990-402a-90b1-affbd44376cd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.210801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d6e69b-7990-402a-90b1-affbd44376cd-kube-api-access-z79kq" (OuterVolumeSpecName: "kube-api-access-z79kq") pod "90d6e69b-7990-402a-90b1-affbd44376cd" (UID: "90d6e69b-7990-402a-90b1-affbd44376cd"). InnerVolumeSpecName "kube-api-access-z79kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.256761 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-config" (OuterVolumeSpecName: "config") pod "90d6e69b-7990-402a-90b1-affbd44376cd" (UID: "90d6e69b-7990-402a-90b1-affbd44376cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.257201 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90d6e69b-7990-402a-90b1-affbd44376cd" (UID: "90d6e69b-7990-402a-90b1-affbd44376cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.276518 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "90d6e69b-7990-402a-90b1-affbd44376cd" (UID: "90d6e69b-7990-402a-90b1-affbd44376cd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.291144 4833 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.291182 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.291193 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.291203 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z79kq\" (UniqueName: \"kubernetes.io/projected/90d6e69b-7990-402a-90b1-affbd44376cd-kube-api-access-z79kq\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.291213 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90d6e69b-7990-402a-90b1-affbd44376cd-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.295274 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e16-account-create-mf4v7"] Oct 13 06:47:46 crc kubenswrapper[4833]: W1013 06:47:46.297558 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56a9b023_885e_4bfe_8fa8_21cb68518b48.slice/crio-7c7673ee67f5982e96d0f403479939f94ee5f927aece54ff612f5935492b3b21 WatchSource:0}: Error finding container 7c7673ee67f5982e96d0f403479939f94ee5f927aece54ff612f5935492b3b21: Status 404 returned error can't find the container with id 7c7673ee67f5982e96d0f403479939f94ee5f927aece54ff612f5935492b3b21 Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.913774 4833 generic.go:334] "Generic (PLEG): container finished" podID="56a9b023-885e-4bfe-8fa8-21cb68518b48" containerID="b1df1f9310d6f29aed59c99011ac33f389a422870bb4e4529f70e5dfd9e4d56f" exitCode=0 Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.913877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e16-account-create-mf4v7" event={"ID":"56a9b023-885e-4bfe-8fa8-21cb68518b48","Type":"ContainerDied","Data":"b1df1f9310d6f29aed59c99011ac33f389a422870bb4e4529f70e5dfd9e4d56f"} Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.915287 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e16-account-create-mf4v7" event={"ID":"56a9b023-885e-4bfe-8fa8-21cb68518b48","Type":"ContainerStarted","Data":"7c7673ee67f5982e96d0f403479939f94ee5f927aece54ff612f5935492b3b21"} Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.917951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-bxq6n" event={"ID":"d3c4d37b-34eb-411f-9a0f-e266fdf37141","Type":"ContainerStarted","Data":"94f41b633f482bd10ac5e593952a9e2259ee80551092d69b162f93a3b634fe8a"} Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.920141 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbbcdc7cb-9s5cb" event={"ID":"90d6e69b-7990-402a-90b1-affbd44376cd","Type":"ContainerDied","Data":"95e559b240dc2787177b1ddc8dfdae8672e9457b79ea305bf2acfe212d88728a"} Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.920186 4833 scope.go:117] "RemoveContainer" containerID="08788cf590584a916b041279ff8d75a402c9c1f78d1a781850a0bc86ece29f34" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.920220 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cbbcdc7cb-9s5cb" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.929784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerStarted","Data":"069701aaff75a7a32fda759312c10c3428d8a25f981e8f33c90e90f28d140bf7"} Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.931574 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.961999 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cbbcdc7cb-9s5cb"] Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.976668 4833 scope.go:117] "RemoveContainer" containerID="eabb7866f4cd10a20d8f69c0751abec8dd19c02f3aa1edee8758d0c3f00c24c5" Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.980242 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cbbcdc7cb-9s5cb"] Oct 13 06:47:46 crc kubenswrapper[4833]: I1013 06:47:46.986024 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bxq6n" podStartSLOduration=2.7725222880000002 podStartE2EDuration="10.986002236s" podCreationTimestamp="2025-10-13 06:47:36 +0000 UTC" firstStartedPulling="2025-10-13 06:47:37.607143729 +0000 UTC m=+1147.707566645" lastFinishedPulling="2025-10-13 06:47:45.820623677 +0000 UTC m=+1155.921046593" observedRunningTime="2025-10-13 06:47:46.985156592 +0000 UTC m=+1157.085579518" watchObservedRunningTime="2025-10-13 06:47:46.986002236 +0000 UTC m=+1157.086425152" Oct 13 06:47:47 crc kubenswrapper[4833]: I1013 06:47:47.003725 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.126982162 podStartE2EDuration="10.0037057s" podCreationTimestamp="2025-10-13 06:47:37 +0000 UTC" firstStartedPulling="2025-10-13 06:47:37.945028312 +0000 UTC m=+1148.045451228" lastFinishedPulling="2025-10-13 06:47:45.82175185 +0000 UTC m=+1155.922174766" observedRunningTime="2025-10-13 06:47:47.001617479 +0000 UTC m=+1157.102040405" watchObservedRunningTime="2025-10-13 06:47:47.0037057 +0000 UTC m=+1157.104128616" Oct 13 06:47:48 crc kubenswrapper[4833]: I1013 06:47:48.293668 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3e16-account-create-mf4v7" Oct 13 06:47:48 crc kubenswrapper[4833]: I1013 06:47:48.459641 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24zhc\" (UniqueName: \"kubernetes.io/projected/56a9b023-885e-4bfe-8fa8-21cb68518b48-kube-api-access-24zhc\") pod \"56a9b023-885e-4bfe-8fa8-21cb68518b48\" (UID: \"56a9b023-885e-4bfe-8fa8-21cb68518b48\") " Oct 13 06:47:48 crc kubenswrapper[4833]: I1013 06:47:48.465036 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a9b023-885e-4bfe-8fa8-21cb68518b48-kube-api-access-24zhc" (OuterVolumeSpecName: "kube-api-access-24zhc") pod "56a9b023-885e-4bfe-8fa8-21cb68518b48" (UID: "56a9b023-885e-4bfe-8fa8-21cb68518b48"). InnerVolumeSpecName "kube-api-access-24zhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:48 crc kubenswrapper[4833]: I1013 06:47:48.562211 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24zhc\" (UniqueName: \"kubernetes.io/projected/56a9b023-885e-4bfe-8fa8-21cb68518b48-kube-api-access-24zhc\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:48 crc kubenswrapper[4833]: I1013 06:47:48.643251 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" path="/var/lib/kubelet/pods/90d6e69b-7990-402a-90b1-affbd44376cd/volumes" Oct 13 06:47:48 crc kubenswrapper[4833]: I1013 06:47:48.947655 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e16-account-create-mf4v7" event={"ID":"56a9b023-885e-4bfe-8fa8-21cb68518b48","Type":"ContainerDied","Data":"7c7673ee67f5982e96d0f403479939f94ee5f927aece54ff612f5935492b3b21"} Oct 13 06:47:48 crc kubenswrapper[4833]: I1013 06:47:48.947699 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3e16-account-create-mf4v7" Oct 13 06:47:48 crc kubenswrapper[4833]: I1013 06:47:48.947707 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c7673ee67f5982e96d0f403479939f94ee5f927aece54ff612f5935492b3b21" Oct 13 06:47:51 crc kubenswrapper[4833]: I1013 06:47:51.777552 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:51 crc kubenswrapper[4833]: I1013 06:47:51.778197 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="ceilometer-central-agent" containerID="cri-o://09f05ae0f15f158640bfcddf40e73222963237ea018533209d716c3e0d5be6fb" gracePeriod=30 Oct 13 06:47:51 crc kubenswrapper[4833]: I1013 06:47:51.778237 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="proxy-httpd" containerID="cri-o://069701aaff75a7a32fda759312c10c3428d8a25f981e8f33c90e90f28d140bf7" gracePeriod=30 Oct 13 06:47:51 crc kubenswrapper[4833]: I1013 06:47:51.778303 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="sg-core" containerID="cri-o://5e2fdf635c2cb9e5aa06ff67b3bfaee2fe1530a9590e2992c6db48616b93f506" gracePeriod=30 Oct 13 06:47:51 crc kubenswrapper[4833]: I1013 06:47:51.778338 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="ceilometer-notification-agent" containerID="cri-o://5dd9b98ccf548d37dcdac3f8dbd8013568ce016f11698d54a122341be98a8f15" gracePeriod=30 Oct 13 06:47:51 crc kubenswrapper[4833]: I1013 06:47:51.977017 4833 generic.go:334] "Generic (PLEG): container finished" podID="fd885861-10bc-45a0-9ded-d17019cae13a" containerID="5e2fdf635c2cb9e5aa06ff67b3bfaee2fe1530a9590e2992c6db48616b93f506" exitCode=2 Oct 13 06:47:51 crc kubenswrapper[4833]: I1013 06:47:51.977059 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerDied","Data":"5e2fdf635c2cb9e5aa06ff67b3bfaee2fe1530a9590e2992c6db48616b93f506"} Oct 13 06:47:52 crc kubenswrapper[4833]: I1013 06:47:52.987011 4833 generic.go:334] "Generic (PLEG): container finished" podID="fd885861-10bc-45a0-9ded-d17019cae13a" containerID="069701aaff75a7a32fda759312c10c3428d8a25f981e8f33c90e90f28d140bf7" exitCode=0 Oct 13 06:47:52 crc kubenswrapper[4833]: I1013 06:47:52.987331 4833 generic.go:334] "Generic (PLEG): container finished" podID="fd885861-10bc-45a0-9ded-d17019cae13a" containerID="09f05ae0f15f158640bfcddf40e73222963237ea018533209d716c3e0d5be6fb" exitCode=0 Oct 13 06:47:52 crc kubenswrapper[4833]: I1013 06:47:52.987382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerDied","Data":"069701aaff75a7a32fda759312c10c3428d8a25f981e8f33c90e90f28d140bf7"} Oct 13 06:47:52 crc kubenswrapper[4833]: I1013 06:47:52.987408 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerDied","Data":"09f05ae0f15f158640bfcddf40e73222963237ea018533209d716c3e0d5be6fb"} Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 
06:47:56.017683 4833 generic.go:334] "Generic (PLEG): container finished" podID="fd885861-10bc-45a0-9ded-d17019cae13a" containerID="5dd9b98ccf548d37dcdac3f8dbd8013568ce016f11698d54a122341be98a8f15" exitCode=0 Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.017741 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerDied","Data":"5dd9b98ccf548d37dcdac3f8dbd8013568ce016f11698d54a122341be98a8f15"} Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.017775 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd885861-10bc-45a0-9ded-d17019cae13a","Type":"ContainerDied","Data":"efbdabc0484cb279905cb0a712fa562d654b387e4cd59d5c5f8330991402d5f2"} Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.017789 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efbdabc0484cb279905cb0a712fa562d654b387e4cd59d5c5f8330991402d5f2" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.071864 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.209391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-log-httpd\") pod \"fd885861-10bc-45a0-9ded-d17019cae13a\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.209447 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-combined-ca-bundle\") pod \"fd885861-10bc-45a0-9ded-d17019cae13a\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.209513 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-sg-core-conf-yaml\") pod \"fd885861-10bc-45a0-9ded-d17019cae13a\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.209596 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9rk6\" (UniqueName: \"kubernetes.io/projected/fd885861-10bc-45a0-9ded-d17019cae13a-kube-api-access-m9rk6\") pod \"fd885861-10bc-45a0-9ded-d17019cae13a\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.209722 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-config-data\") pod \"fd885861-10bc-45a0-9ded-d17019cae13a\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.209777 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-run-httpd\") pod \"fd885861-10bc-45a0-9ded-d17019cae13a\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.209811 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-scripts\") 
pod \"fd885861-10bc-45a0-9ded-d17019cae13a\" (UID: \"fd885861-10bc-45a0-9ded-d17019cae13a\") " Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.210050 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd885861-10bc-45a0-9ded-d17019cae13a" (UID: "fd885861-10bc-45a0-9ded-d17019cae13a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.210521 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.210698 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd885861-10bc-45a0-9ded-d17019cae13a" (UID: "fd885861-10bc-45a0-9ded-d17019cae13a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.215262 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd885861-10bc-45a0-9ded-d17019cae13a-kube-api-access-m9rk6" (OuterVolumeSpecName: "kube-api-access-m9rk6") pod "fd885861-10bc-45a0-9ded-d17019cae13a" (UID: "fd885861-10bc-45a0-9ded-d17019cae13a"). InnerVolumeSpecName "kube-api-access-m9rk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.220574 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-scripts" (OuterVolumeSpecName: "scripts") pod "fd885861-10bc-45a0-9ded-d17019cae13a" (UID: "fd885861-10bc-45a0-9ded-d17019cae13a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.243943 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd885861-10bc-45a0-9ded-d17019cae13a" (UID: "fd885861-10bc-45a0-9ded-d17019cae13a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.289901 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd885861-10bc-45a0-9ded-d17019cae13a" (UID: "fd885861-10bc-45a0-9ded-d17019cae13a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.312200 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.312245 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.312257 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9rk6\" (UniqueName: \"kubernetes.io/projected/fd885861-10bc-45a0-9ded-d17019cae13a-kube-api-access-m9rk6\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.312270 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd885861-10bc-45a0-9ded-d17019cae13a-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.312280 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.341250 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-config-data" (OuterVolumeSpecName: "config-data") pod "fd885861-10bc-45a0-9ded-d17019cae13a" (UID: "fd885861-10bc-45a0-9ded-d17019cae13a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:56 crc kubenswrapper[4833]: I1013 06:47:56.413278 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd885861-10bc-45a0-9ded-d17019cae13a-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.026512 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.050796 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.060516 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.078987 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:57 crc kubenswrapper[4833]: E1013 06:47:57.079381 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="ceilometer-notification-agent" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079403 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="ceilometer-notification-agent" Oct 13 06:47:57 crc kubenswrapper[4833]: E1013 06:47:57.079420 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="ceilometer-central-agent" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079426 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="ceilometer-central-agent" Oct 13 06:47:57 crc kubenswrapper[4833]: E1013 06:47:57.079444 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a9b023-885e-4bfe-8fa8-21cb68518b48" containerName="mariadb-account-create" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079450 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a9b023-885e-4bfe-8fa8-21cb68518b48" containerName="mariadb-account-create" Oct 13 06:47:57 crc kubenswrapper[4833]: E1013 06:47:57.079464 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="proxy-httpd" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079469 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="proxy-httpd" Oct 13 06:47:57 crc kubenswrapper[4833]: E1013 06:47:57.079479 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" containerName="neutron-api" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079486 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" containerName="neutron-api" Oct 13 06:47:57 crc kubenswrapper[4833]: E1013 06:47:57.079492 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="sg-core" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079499 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="sg-core" Oct 13 06:47:57 crc kubenswrapper[4833]: E1013 06:47:57.079518 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" containerName="neutron-httpd" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079523 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" containerName="neutron-httpd" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079727 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="ceilometer-notification-agent" Oct 13 06:47:57 crc 
kubenswrapper[4833]: I1013 06:47:57.079744 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="ceilometer-central-agent" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079759 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" containerName="neutron-api" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079766 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a9b023-885e-4bfe-8fa8-21cb68518b48" containerName="mariadb-account-create" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079775 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="sg-core" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079792 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" containerName="proxy-httpd" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.079804 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d6e69b-7990-402a-90b1-affbd44376cd" containerName="neutron-httpd" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.086060 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.089799 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.090241 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.115187 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.227780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-config-data\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.227827 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-run-httpd\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.227847 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.227864 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxd9\" (UniqueName: \"kubernetes.io/projected/f564521f-ebb2-4103-9326-2acfcb6c90e4-kube-api-access-vbxd9\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.227892 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-log-httpd\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.227975 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-scripts\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.228097 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.330740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.330881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-run-httpd\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.330918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-config-data\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.330952 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxd9\" (UniqueName: \"kubernetes.io/projected/f564521f-ebb2-4103-9326-2acfcb6c90e4-kube-api-access-vbxd9\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.330987 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.331043 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-log-httpd\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.331198 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-scripts\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.331447 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-run-httpd\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.331709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-log-httpd\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.335878 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.336093 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-config-data\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.339573 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.342359 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-scripts\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.359444 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxd9\" (UniqueName: \"kubernetes.io/projected/f564521f-ebb2-4103-9326-2acfcb6c90e4-kube-api-access-vbxd9\") pod \"ceilometer-0\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " pod="openstack/ceilometer-0" Oct 13 06:47:57 crc kubenswrapper[4833]: I1013 06:47:57.426153 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:47:58 crc kubenswrapper[4833]: I1013 06:47:58.002138 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:47:58 crc kubenswrapper[4833]: W1013 06:47:58.006829 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf564521f_ebb2_4103_9326_2acfcb6c90e4.slice/crio-41ffc940e438488d037ac5dff346a1c8f1b485f36b34b89b7a0aa77cc1c39f6e WatchSource:0}: Error finding container 41ffc940e438488d037ac5dff346a1c8f1b485f36b34b89b7a0aa77cc1c39f6e: Status 404 returned error can't find the container with id 41ffc940e438488d037ac5dff346a1c8f1b485f36b34b89b7a0aa77cc1c39f6e Oct 13 06:47:58 crc kubenswrapper[4833]: I1013 06:47:58.051573 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerStarted","Data":"41ffc940e438488d037ac5dff346a1c8f1b485f36b34b89b7a0aa77cc1c39f6e"} Oct 13 06:47:58 crc kubenswrapper[4833]: I1013 06:47:58.059187 4833 generic.go:334] "Generic (PLEG): container finished" podID="d3c4d37b-34eb-411f-9a0f-e266fdf37141" containerID="94f41b633f482bd10ac5e593952a9e2259ee80551092d69b162f93a3b634fe8a" exitCode=0 Oct 13 06:47:58 crc kubenswrapper[4833]: I1013 06:47:58.059235 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bxq6n" event={"ID":"d3c4d37b-34eb-411f-9a0f-e266fdf37141","Type":"ContainerDied","Data":"94f41b633f482bd10ac5e593952a9e2259ee80551092d69b162f93a3b634fe8a"} Oct 13 06:47:58 crc kubenswrapper[4833]: I1013 06:47:58.637110 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd885861-10bc-45a0-9ded-d17019cae13a" path="/var/lib/kubelet/pods/fd885861-10bc-45a0-9ded-d17019cae13a/volumes" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.069895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerStarted","Data":"04b344348eb62bb1741dbd68b52de9e84fd9af3c9fd266bc53196689afe34baf"} Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.432714 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.579286 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m45kc\" (UniqueName: \"kubernetes.io/projected/d3c4d37b-34eb-411f-9a0f-e266fdf37141-kube-api-access-m45kc\") pod \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.579530 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-scripts\") pod \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.579610 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-combined-ca-bundle\") pod \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.579661 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-config-data\") pod \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\" (UID: \"d3c4d37b-34eb-411f-9a0f-e266fdf37141\") " Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.586094 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-scripts" (OuterVolumeSpecName: "scripts") pod "d3c4d37b-34eb-411f-9a0f-e266fdf37141" (UID: "d3c4d37b-34eb-411f-9a0f-e266fdf37141"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.587056 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c4d37b-34eb-411f-9a0f-e266fdf37141-kube-api-access-m45kc" (OuterVolumeSpecName: "kube-api-access-m45kc") pod "d3c4d37b-34eb-411f-9a0f-e266fdf37141" (UID: "d3c4d37b-34eb-411f-9a0f-e266fdf37141"). InnerVolumeSpecName "kube-api-access-m45kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.610526 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-config-data" (OuterVolumeSpecName: "config-data") pod "d3c4d37b-34eb-411f-9a0f-e266fdf37141" (UID: "d3c4d37b-34eb-411f-9a0f-e266fdf37141"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.612602 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3c4d37b-34eb-411f-9a0f-e266fdf37141" (UID: "d3c4d37b-34eb-411f-9a0f-e266fdf37141"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.684588 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m45kc\" (UniqueName: \"kubernetes.io/projected/d3c4d37b-34eb-411f-9a0f-e266fdf37141-kube-api-access-m45kc\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.684629 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.684641 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:47:59 crc kubenswrapper[4833]: I1013 06:47:59.684654 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c4d37b-34eb-411f-9a0f-e266fdf37141-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.101886 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bxq6n" event={"ID":"d3c4d37b-34eb-411f-9a0f-e266fdf37141","Type":"ContainerDied","Data":"aabf31579ea53d52821c693fda0e304d594b6ddea590ebdbb5c5ccdceb3109c9"} Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.102991 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aabf31579ea53d52821c693fda0e304d594b6ddea590ebdbb5c5ccdceb3109c9" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.103026 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bxq6n" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.207232 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 06:48:00 crc kubenswrapper[4833]: E1013 06:48:00.208491 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c4d37b-34eb-411f-9a0f-e266fdf37141" containerName="nova-cell0-conductor-db-sync" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.208514 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c4d37b-34eb-411f-9a0f-e266fdf37141" containerName="nova-cell0-conductor-db-sync" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.208879 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c4d37b-34eb-411f-9a0f-e266fdf37141" containerName="nova-cell0-conductor-db-sync" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.209945 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.218132 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.219306 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dqk8g" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.246774 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.398325 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.398392 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.398461 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtdbn\" (UniqueName: \"kubernetes.io/projected/fd0bf370-6aac-4334-b612-db75770844df-kube-api-access-mtdbn\") pod \"nova-cell0-conductor-0\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.500225 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtdbn\" (UniqueName: \"kubernetes.io/projected/fd0bf370-6aac-4334-b612-db75770844df-kube-api-access-mtdbn\") pod \"nova-cell0-conductor-0\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.500330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.500369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.511322 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.518898 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.519268 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtdbn\" (UniqueName: \"kubernetes.io/projected/fd0bf370-6aac-4334-b612-db75770844df-kube-api-access-mtdbn\") pod \"nova-cell0-conductor-0\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.541601 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.542574 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.542628 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.542672 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.543338 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e40769d80ac05fe37627a30679bf55af458c5472940a5c49dc7bce9376576247"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 06:48:00 crc kubenswrapper[4833]: I1013 06:48:00.543405 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://e40769d80ac05fe37627a30679bf55af458c5472940a5c49dc7bce9376576247" gracePeriod=600 Oct 13 06:48:01 crc kubenswrapper[4833]: I1013 06:48:01.139459 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 06:48:01 crc kubenswrapper[4833]: I1013 06:48:01.141612 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerStarted","Data":"26823de353a8760c5ebbcaaefc6be008fc3a41771a195d964e142397c91dd7d6"} Oct 13 06:48:01 crc kubenswrapper[4833]: I1013 06:48:01.142068 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerStarted","Data":"7c43d137cab1969d963afc221baf1b8f8d88dea0cb5e37184e172cfef9f58cac"} Oct 13 06:48:01 crc kubenswrapper[4833]: I1013 06:48:01.159072 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="e40769d80ac05fe37627a30679bf55af458c5472940a5c49dc7bce9376576247" exitCode=0 Oct 13 06:48:01 crc kubenswrapper[4833]: I1013 06:48:01.159132 4833 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"e40769d80ac05fe37627a30679bf55af458c5472940a5c49dc7bce9376576247"} Oct 13 06:48:01 crc kubenswrapper[4833]: I1013 06:48:01.159166 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"1e0b8be85d4de611f9b44e391e311db1ba1fbe1a8afc86f11d35d9a6be4b16ce"} Oct 13 06:48:01 crc kubenswrapper[4833]: I1013 06:48:01.159214 4833 scope.go:117] "RemoveContainer" containerID="dd9e737b5446edf3a1bc45401b54500d618ca084763a39fb5a0be12fcd006b99" Oct 13 06:48:02 crc kubenswrapper[4833]: I1013 06:48:02.174493 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd0bf370-6aac-4334-b612-db75770844df","Type":"ContainerStarted","Data":"d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8"} Oct 13 06:48:02 crc kubenswrapper[4833]: I1013 06:48:02.175853 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd0bf370-6aac-4334-b612-db75770844df","Type":"ContainerStarted","Data":"3925169af8a9b5f4ed6c674eb78c9e1a977bae7d21b3de8613b162e7a08e0aae"} Oct 13 06:48:02 crc kubenswrapper[4833]: I1013 06:48:02.175879 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:02 crc kubenswrapper[4833]: I1013 06:48:02.197090 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.197074734 podStartE2EDuration="2.197074734s" podCreationTimestamp="2025-10-13 06:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:02.192212897 +0000 UTC m=+1172.292635813" watchObservedRunningTime="2025-10-13 06:48:02.197074734 +0000 UTC m=+1172.297497650" Oct 13 06:48:03 crc kubenswrapper[4833]: I1013 06:48:03.197469 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerStarted","Data":"dccc56cae35eeb688e15c7824a5075cbc1f00517bde392388433981c1ee7e18a"} Oct 13 06:48:03 crc kubenswrapper[4833]: I1013 06:48:03.198386 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 06:48:10 crc kubenswrapper[4833]: I1013 06:48:10.577350 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 13 06:48:10 crc kubenswrapper[4833]: I1013 06:48:10.601776 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.980725001 podStartE2EDuration="13.601744942s" podCreationTimestamp="2025-10-13 06:47:57 +0000 UTC" firstStartedPulling="2025-10-13 06:47:58.008807379 +0000 UTC m=+1168.109230285" lastFinishedPulling="2025-10-13 06:48:02.62982727 +0000 UTC m=+1172.730250226" observedRunningTime="2025-10-13 06:48:03.224212282 +0000 UTC m=+1173.324635198" watchObservedRunningTime="2025-10-13 06:48:10.601744942 +0000 UTC m=+1180.702167908" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.023337 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jpl8n"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 
06:48:11.025437 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.029505 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.029901 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.039303 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jpl8n"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.124512 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.124691 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-config-data\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.124729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l555k\" (UniqueName: \"kubernetes.io/projected/949b1dbe-5e00-401f-a0a6-d0830a0092ad-kube-api-access-l555k\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.124900 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-scripts\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.148160 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.150036 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.155148 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.169203 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.228319 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.228381 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-config-data\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.228404 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l555k\" (UniqueName: \"kubernetes.io/projected/949b1dbe-5e00-401f-a0a6-d0830a0092ad-kube-api-access-l555k\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.228443 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv9r6\" (UniqueName: \"kubernetes.io/projected/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-kube-api-access-zv9r6\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.228471 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-scripts\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.228499 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-logs\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.228566 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-config-data\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.228592 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.242287 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.243058 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-scripts\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.243827 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.245582 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-config-data\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.262048 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.278166 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.306591 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l555k\" (UniqueName: \"kubernetes.io/projected/949b1dbe-5e00-401f-a0a6-d0830a0092ad-kube-api-access-l555k\") pod \"nova-cell0-cell-mapping-jpl8n\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.308159 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.334645 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-config-data\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.334783 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmtd\" (UniqueName: \"kubernetes.io/projected/0e166efd-c317-44e2-9c48-e122a9cc3fab-kube-api-access-wpmtd\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.334938 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e166efd-c317-44e2-9c48-e122a9cc3fab-logs\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.335028 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv9r6\" (UniqueName: \"kubernetes.io/projected/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-kube-api-access-zv9r6\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc 
kubenswrapper[4833]: I1013 06:48:11.335129 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-logs\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.335277 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.335361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-config-data\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.335439 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.337915 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-logs\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.351470 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.354679 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.362238 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.364496 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.370160 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.371015 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-config-data\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.379121 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv9r6\" (UniqueName: \"kubernetes.io/projected/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-kube-api-access-zv9r6\") pod \"nova-api-0\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.386616 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.388194 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.393625 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.425248 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447050 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-config-data\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447366 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmtd\" (UniqueName: \"kubernetes.io/projected/0e166efd-c317-44e2-9c48-e122a9cc3fab-kube-api-access-wpmtd\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447436 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-config-data\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447482 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e166efd-c317-44e2-9c48-e122a9cc3fab-logs\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447514 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6klls\" (UniqueName: \"kubernetes.io/projected/30cc9429-c129-488f-9839-f69607ee7640-kube-api-access-6klls\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4xj\" (UniqueName: \"kubernetes.io/projected/f75d210b-f440-4d30-ae98-927b7660dad6-kube-api-access-wc4xj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447614 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447671 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.447692 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.449057 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e166efd-c317-44e2-9c48-e122a9cc3fab-logs\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.456739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.462175 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-config-data\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.478068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmtd\" (UniqueName: \"kubernetes.io/projected/0e166efd-c317-44e2-9c48-e122a9cc3fab-kube-api-access-wpmtd\") pod \"nova-metadata-0\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.483246 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.483992 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.488261 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.552911 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-26tbs"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.553367 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-config-data\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.553430 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.553473 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6klls\" (UniqueName: \"kubernetes.io/projected/30cc9429-c129-488f-9839-f69607ee7640-kube-api-access-6klls\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.553580 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4xj\" (UniqueName: \"kubernetes.io/projected/f75d210b-f440-4d30-ae98-927b7660dad6-kube-api-access-wc4xj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.553614 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.553737 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.554916 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.558901 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-config-data\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.559879 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.561098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.563159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.592171 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4xj\" (UniqueName: \"kubernetes.io/projected/f75d210b-f440-4d30-ae98-927b7660dad6-kube-api-access-wc4xj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.593200 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6klls\" (UniqueName: \"kubernetes.io/projected/30cc9429-c129-488f-9839-f69607ee7640-kube-api-access-6klls\") pod \"nova-scheduler-0\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.635003 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-26tbs"] Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.655019 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.655080 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.655150 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-config\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" 
(UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.655174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.655195 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt56q\" (UniqueName: \"kubernetes.io/projected/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-kube-api-access-mt56q\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.655228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.760744 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-config\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.761060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.761095 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt56q\" (UniqueName: \"kubernetes.io/projected/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-kube-api-access-mt56q\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.761124 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.761276 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.761338 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.763975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-config\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.765083 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.765495 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.765600 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.765835 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.781191 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt56q\" (UniqueName: \"kubernetes.io/projected/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-kube-api-access-mt56q\") pod \"dnsmasq-dns-6ffc974fdf-26tbs\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") " pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.796197 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.807000 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.905466 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:11 crc kubenswrapper[4833]: I1013 06:48:11.910798 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jpl8n"] Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.052938 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:12 crc kubenswrapper[4833]: W1013 06:48:12.073233 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e166efd_c317_44e2_9c48_e122a9cc3fab.slice/crio-2046778404b9bddad8eae75e4adadff18f7167c21a5441bbe9045a33029522ab WatchSource:0}: Error finding container 2046778404b9bddad8eae75e4adadff18f7167c21a5441bbe9045a33029522ab: Status 404 returned error can't find the container with id 2046778404b9bddad8eae75e4adadff18f7167c21a5441bbe9045a33029522ab Oct 13 06:48:12 crc kubenswrapper[4833]: W1013 06:48:12.121740 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2846b39_c826_4f73_aff8_dccd5a1f4ad1.slice/crio-b5d0cb9036b38837b39e284076018d88e4defdde9e50a55c17c3703d2b683773 WatchSource:0}: Error finding container b5d0cb9036b38837b39e284076018d88e4defdde9e50a55c17c3703d2b683773: Status 404 returned error can't find the container with id b5d0cb9036b38837b39e284076018d88e4defdde9e50a55c17c3703d2b683773 Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.126042 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.189601 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mc8l7"] Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.191207 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.194062 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.194142 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.203784 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mc8l7"] Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.282117 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-config-data\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.282176 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qmwj\" (UniqueName: \"kubernetes.io/projected/bc812647-c154-4fe8-b6cc-fcf008841900-kube-api-access-2qmwj\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.282438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.282650 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-scripts\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.322781 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2846b39-c826-4f73-aff8-dccd5a1f4ad1","Type":"ContainerStarted","Data":"b5d0cb9036b38837b39e284076018d88e4defdde9e50a55c17c3703d2b683773"} Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.324788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e166efd-c317-44e2-9c48-e122a9cc3fab","Type":"ContainerStarted","Data":"2046778404b9bddad8eae75e4adadff18f7167c21a5441bbe9045a33029522ab"} Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.330072 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jpl8n" event={"ID":"949b1dbe-5e00-401f-a0a6-d0830a0092ad","Type":"ContainerStarted","Data":"90ee1a23cd38eedd15beb4921be258e3fbaa0411381f1f5b0925d97d2d4fcd83"} Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.330130 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jpl8n" event={"ID":"949b1dbe-5e00-401f-a0a6-d0830a0092ad","Type":"ContainerStarted","Data":"f26933f7d88eaab824db653f2278be2664cb7d6534df9d1269049f3dde5b3bd0"} Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 
06:48:12.353666 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jpl8n" podStartSLOduration=1.353649863 podStartE2EDuration="1.353649863s" podCreationTimestamp="2025-10-13 06:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:12.346083911 +0000 UTC m=+1182.446506837" watchObservedRunningTime="2025-10-13 06:48:12.353649863 +0000 UTC m=+1182.454072779" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.354742 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.385775 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.385918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-scripts\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.386001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-config-data\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.386040 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qmwj\" (UniqueName: \"kubernetes.io/projected/bc812647-c154-4fe8-b6cc-fcf008841900-kube-api-access-2qmwj\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: W1013 06:48:12.388815 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf75d210b_f440_4d30_ae98_927b7660dad6.slice/crio-c1e92efc1a98f3d5da5d97e47e91766c4493561c4366d6a2a532622e5c10d42f WatchSource:0}: Error finding container c1e92efc1a98f3d5da5d97e47e91766c4493561c4366d6a2a532622e5c10d42f: Status 404 returned error can't find the container with id c1e92efc1a98f3d5da5d97e47e91766c4493561c4366d6a2a532622e5c10d42f Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.390863 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.394244 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-scripts\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.395125 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-config-data\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.399183 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.407627 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qmwj\" (UniqueName: \"kubernetes.io/projected/bc812647-c154-4fe8-b6cc-fcf008841900-kube-api-access-2qmwj\") pod \"nova-cell1-conductor-db-sync-mc8l7\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.486679 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-26tbs"] Oct 13 06:48:12 crc kubenswrapper[4833]: W1013 06:48:12.489453 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c6a27f2_c55b_45bc_b6e3_9a4d7ecba2e9.slice/crio-f1a34e39026a0d783506a761bceb4c71203a92921c1074dd163fc2424e486a9f WatchSource:0}: Error finding container f1a34e39026a0d783506a761bceb4c71203a92921c1074dd163fc2424e486a9f: Status 404 returned error can't find the container with id f1a34e39026a0d783506a761bceb4c71203a92921c1074dd163fc2424e486a9f Oct 13 06:48:12 crc kubenswrapper[4833]: I1013 06:48:12.527284 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.054416 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mc8l7"] Oct 13 06:48:13 crc kubenswrapper[4833]: W1013 06:48:13.061993 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc812647_c154_4fe8_b6cc_fcf008841900.slice/crio-01df84021262d64ba9d9ac1c500de73258096a5dfb6417de1c3af1537e404a00 WatchSource:0}: Error finding container 01df84021262d64ba9d9ac1c500de73258096a5dfb6417de1c3af1537e404a00: Status 404 returned error can't find the container with id 01df84021262d64ba9d9ac1c500de73258096a5dfb6417de1c3af1537e404a00 Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.366303 4833 generic.go:334] "Generic (PLEG): container finished" podID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" containerID="8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed" exitCode=0 Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.366399 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" event={"ID":"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9","Type":"ContainerDied","Data":"8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed"} Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.367139 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" event={"ID":"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9","Type":"ContainerStarted","Data":"f1a34e39026a0d783506a761bceb4c71203a92921c1074dd163fc2424e486a9f"} Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.370306 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f75d210b-f440-4d30-ae98-927b7660dad6","Type":"ContainerStarted","Data":"c1e92efc1a98f3d5da5d97e47e91766c4493561c4366d6a2a532622e5c10d42f"} Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.372706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" event={"ID":"bc812647-c154-4fe8-b6cc-fcf008841900","Type":"ContainerStarted","Data":"75b83eb5a0f4a87001ec550008c86da38762ad7776efbc4d0f7db261a9b40d50"} Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.372747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" event={"ID":"bc812647-c154-4fe8-b6cc-fcf008841900","Type":"ContainerStarted","Data":"01df84021262d64ba9d9ac1c500de73258096a5dfb6417de1c3af1537e404a00"} Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.384200 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"30cc9429-c129-488f-9839-f69607ee7640","Type":"ContainerStarted","Data":"ac16962d55ccc4a0cc8f83d66f82be41a593e8d0cd8a691bfca25ad1f14b3f12"} Oct 13 06:48:13 crc kubenswrapper[4833]: I1013 06:48:13.420410 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" podStartSLOduration=1.420388069 podStartE2EDuration="1.420388069s" podCreationTimestamp="2025-10-13 06:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:13.411054058 +0000 UTC m=+1183.511476984" watchObservedRunningTime="2025-10-13 06:48:13.420388069 +0000 UTC m=+1183.520810985" Oct 13 06:48:14 crc kubenswrapper[4833]: I1013 
06:48:14.396625 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" event={"ID":"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9","Type":"ContainerStarted","Data":"32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364"} Oct 13 06:48:14 crc kubenswrapper[4833]: I1013 06:48:14.397000 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:14 crc kubenswrapper[4833]: I1013 06:48:14.428302 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" podStartSLOduration=3.428285899 podStartE2EDuration="3.428285899s" podCreationTimestamp="2025-10-13 06:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:14.425263034 +0000 UTC m=+1184.525685950" watchObservedRunningTime="2025-10-13 06:48:14.428285899 +0000 UTC m=+1184.528708815" Oct 13 06:48:15 crc kubenswrapper[4833]: I1013 06:48:15.015848 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:15 crc kubenswrapper[4833]: I1013 06:48:15.025805 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.429810 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f75d210b-f440-4d30-ae98-927b7660dad6","Type":"ContainerStarted","Data":"bdea18d1dda04a6f48a543d7b832b414dd45a318e0945919afca57e801a3a01c"} Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.430667 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f75d210b-f440-4d30-ae98-927b7660dad6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bdea18d1dda04a6f48a543d7b832b414dd45a318e0945919afca57e801a3a01c" gracePeriod=30 Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.439444 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e166efd-c317-44e2-9c48-e122a9cc3fab","Type":"ContainerStarted","Data":"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71"} Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.439521 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e166efd-c317-44e2-9c48-e122a9cc3fab","Type":"ContainerStarted","Data":"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c"} Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.439586 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerName="nova-metadata-log" containerID="cri-o://096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c" gracePeriod=30 Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.439677 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerName="nova-metadata-metadata" containerID="cri-o://8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71" gracePeriod=30 Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.454954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"30cc9429-c129-488f-9839-f69607ee7640","Type":"ContainerStarted","Data":"cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c"} Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.458995 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2846b39-c826-4f73-aff8-dccd5a1f4ad1","Type":"ContainerStarted","Data":"94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf"} Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.459027 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2846b39-c826-4f73-aff8-dccd5a1f4ad1","Type":"ContainerStarted","Data":"41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144"} Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.461409 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.612514628 podStartE2EDuration="5.461393502s" podCreationTimestamp="2025-10-13 06:48:11 +0000 UTC" firstStartedPulling="2025-10-13 06:48:12.405117944 +0000 UTC m=+1182.505540860" lastFinishedPulling="2025-10-13 06:48:15.253996808 +0000 UTC m=+1185.354419734" observedRunningTime="2025-10-13 06:48:16.453158992 +0000 UTC m=+1186.553581908" watchObservedRunningTime="2025-10-13 06:48:16.461393502 +0000 UTC m=+1186.561816418" Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.486049 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.309187155 podStartE2EDuration="5.486027952s" podCreationTimestamp="2025-10-13 06:48:11 +0000 UTC" firstStartedPulling="2025-10-13 06:48:12.075294069 +0000 UTC m=+1182.175716985" lastFinishedPulling="2025-10-13 06:48:15.252134856 +0000 UTC m=+1185.352557782" observedRunningTime="2025-10-13 06:48:16.477808262 +0000 UTC m=+1186.578231178" watchObservedRunningTime="2025-10-13 06:48:16.486027952 +0000 UTC m=+1186.586450888" Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.489800 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.489862 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.498585 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3810873790000002 podStartE2EDuration="5.498562503s" podCreationTimestamp="2025-10-13 06:48:11 +0000 UTC" firstStartedPulling="2025-10-13 06:48:12.124153557 +0000 UTC m=+1182.224576463" lastFinishedPulling="2025-10-13 06:48:15.241628671 +0000 UTC m=+1185.342051587" observedRunningTime="2025-10-13 06:48:16.496200417 +0000 UTC m=+1186.596623353" watchObservedRunningTime="2025-10-13 06:48:16.498562503 +0000 UTC m=+1186.598985459" Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.529967 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.646594983 podStartE2EDuration="5.529945762s" podCreationTimestamp="2025-10-13 06:48:11 +0000 UTC" firstStartedPulling="2025-10-13 06:48:12.391560334 +0000 UTC m=+1182.491983250" lastFinishedPulling="2025-10-13 06:48:15.274911093 +0000 UTC m=+1185.375334029" observedRunningTime="2025-10-13 06:48:16.517964876 +0000 UTC m=+1186.618387812" watchObservedRunningTime="2025-10-13 06:48:16.529945762 +0000 UTC m=+1186.630368698" Oct 13 06:48:16 
crc kubenswrapper[4833]: I1013 06:48:16.797018 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.807550 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 13 06:48:16 crc kubenswrapper[4833]: I1013 06:48:16.983292 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.101231 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-config-data\") pod \"0e166efd-c317-44e2-9c48-e122a9cc3fab\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.101339 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpmtd\" (UniqueName: \"kubernetes.io/projected/0e166efd-c317-44e2-9c48-e122a9cc3fab-kube-api-access-wpmtd\") pod \"0e166efd-c317-44e2-9c48-e122a9cc3fab\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.101428 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-combined-ca-bundle\") pod \"0e166efd-c317-44e2-9c48-e122a9cc3fab\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.101476 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e166efd-c317-44e2-9c48-e122a9cc3fab-logs\") pod \"0e166efd-c317-44e2-9c48-e122a9cc3fab\" (UID: \"0e166efd-c317-44e2-9c48-e122a9cc3fab\") " Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.102256 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e166efd-c317-44e2-9c48-e122a9cc3fab-logs" (OuterVolumeSpecName: "logs") pod "0e166efd-c317-44e2-9c48-e122a9cc3fab" (UID: "0e166efd-c317-44e2-9c48-e122a9cc3fab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.109294 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e166efd-c317-44e2-9c48-e122a9cc3fab-kube-api-access-wpmtd" (OuterVolumeSpecName: "kube-api-access-wpmtd") pod "0e166efd-c317-44e2-9c48-e122a9cc3fab" (UID: "0e166efd-c317-44e2-9c48-e122a9cc3fab"). InnerVolumeSpecName "kube-api-access-wpmtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.131752 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e166efd-c317-44e2-9c48-e122a9cc3fab" (UID: "0e166efd-c317-44e2-9c48-e122a9cc3fab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.165778 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-config-data" (OuterVolumeSpecName: "config-data") pod "0e166efd-c317-44e2-9c48-e122a9cc3fab" (UID: "0e166efd-c317-44e2-9c48-e122a9cc3fab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.203190 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e166efd-c317-44e2-9c48-e122a9cc3fab-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.203449 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.203628 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpmtd\" (UniqueName: \"kubernetes.io/projected/0e166efd-c317-44e2-9c48-e122a9cc3fab-kube-api-access-wpmtd\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.203719 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e166efd-c317-44e2-9c48-e122a9cc3fab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.473216 4833 generic.go:334] "Generic (PLEG): container finished" podID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerID="8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71" exitCode=0 Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.473256 4833 generic.go:334] "Generic (PLEG): container finished" podID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerID="096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c" exitCode=143 Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.473302 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e166efd-c317-44e2-9c48-e122a9cc3fab","Type":"ContainerDied","Data":"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71"} Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.473349 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e166efd-c317-44e2-9c48-e122a9cc3fab","Type":"ContainerDied","Data":"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c"} Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.473361 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e166efd-c317-44e2-9c48-e122a9cc3fab","Type":"ContainerDied","Data":"2046778404b9bddad8eae75e4adadff18f7167c21a5441bbe9045a33029522ab"} Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.473376 4833 scope.go:117] "RemoveContainer" containerID="8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.474782 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.504679 4833 scope.go:117] "RemoveContainer" containerID="096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.536650 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.539789 4833 scope.go:117] "RemoveContainer" containerID="8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71" Oct 13 06:48:17 crc kubenswrapper[4833]: E1013 06:48:17.540273 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71\": container with ID starting with 8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71 not found: ID does not exist" containerID="8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.540366 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71"} err="failed to get container status \"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71\": rpc error: code = NotFound desc = could not find container \"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71\": container with ID starting with 8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71 not found: ID does not exist" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.544470 4833 scope.go:117] "RemoveContainer" containerID="096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c" Oct 13 06:48:17 crc kubenswrapper[4833]: E1013 06:48:17.545188 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c\": container with ID starting with 096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c not found: ID does not exist" containerID="096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.545297 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c"} err="failed to get container status \"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c\": rpc error: code = NotFound desc = could not find container \"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c\": container with ID starting with 096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c not found: ID does not exist" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.545375 4833 scope.go:117] "RemoveContainer" containerID="8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.545864 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71"} err="failed to get container status \"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71\": rpc error: code = NotFound desc = could not find container \"8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71\": container with ID starting with 
8ffb0d1ca0155fa37e5d3f0f91633efccece4c7a21a9f7a9fa2c932b077b6c71 not found: ID does not exist" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.546053 4833 scope.go:117] "RemoveContainer" containerID="096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.546385 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c"} err="failed to get container status \"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c\": rpc error: code = NotFound desc = could not find container \"096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c\": container with ID starting with 096466f7bd7cd1ba1e7b2b30e6a801fee1db1f7465d8f6d91bdbb28c754c932c not found: ID does not exist" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.554428 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.562910 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:17 crc kubenswrapper[4833]: E1013 06:48:17.563346 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerName="nova-metadata-metadata" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.563364 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerName="nova-metadata-metadata" Oct 13 06:48:17 crc kubenswrapper[4833]: E1013 06:48:17.563394 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerName="nova-metadata-log" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.563403 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerName="nova-metadata-log" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.563679 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerName="nova-metadata-metadata" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.563705 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" containerName="nova-metadata-log" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.564691 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.566822 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.567047 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.571823 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.722594 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-logs\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.722863 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-config-data\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.722919 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2n8\" (UniqueName: \"kubernetes.io/projected/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-kube-api-access-gt2n8\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.722975 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.723037 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.824702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-logs\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.825095 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-config-data\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.825318 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt2n8\" (UniqueName: \"kubernetes.io/projected/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-kube-api-access-gt2n8\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 
06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.825187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-logs\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.825741 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.825948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.831405 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.831997 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.838635 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-config-data\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.842484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2n8\" (UniqueName: \"kubernetes.io/projected/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-kube-api-access-gt2n8\") pod \"nova-metadata-0\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " pod="openstack/nova-metadata-0" Oct 13 06:48:17 crc kubenswrapper[4833]: I1013 06:48:17.924706 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:18 crc kubenswrapper[4833]: I1013 06:48:18.357317 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:18 crc kubenswrapper[4833]: W1013 06:48:18.361155 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ca377ab_8b8f_4abf_9ef1_6fa2e43e7529.slice/crio-31f1bee0410e234c972a84882694bc84403e20697c521b812763644519e919ed WatchSource:0}: Error finding container 31f1bee0410e234c972a84882694bc84403e20697c521b812763644519e919ed: Status 404 returned error can't find the container with id 31f1bee0410e234c972a84882694bc84403e20697c521b812763644519e919ed Oct 13 06:48:18 crc kubenswrapper[4833]: I1013 06:48:18.486060 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529","Type":"ContainerStarted","Data":"31f1bee0410e234c972a84882694bc84403e20697c521b812763644519e919ed"} Oct 13 06:48:18 crc kubenswrapper[4833]: I1013 06:48:18.643291 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e166efd-c317-44e2-9c48-e122a9cc3fab" path="/var/lib/kubelet/pods/0e166efd-c317-44e2-9c48-e122a9cc3fab/volumes" Oct 13 06:48:19 crc kubenswrapper[4833]: I1013 06:48:19.499382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529","Type":"ContainerStarted","Data":"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa"} Oct 13 06:48:19 crc kubenswrapper[4833]: I1013 06:48:19.499451 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529","Type":"ContainerStarted","Data":"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e"} Oct 13 06:48:19 crc kubenswrapper[4833]: I1013 06:48:19.537462 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.537441408 podStartE2EDuration="2.537441408s" podCreationTimestamp="2025-10-13 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:19.530424491 +0000 UTC m=+1189.630847407" watchObservedRunningTime="2025-10-13 06:48:19.537441408 +0000 UTC m=+1189.637864324" Oct 13 06:48:20 crc kubenswrapper[4833]: I1013 06:48:20.514132 4833 generic.go:334] "Generic (PLEG): container finished" podID="bc812647-c154-4fe8-b6cc-fcf008841900" containerID="75b83eb5a0f4a87001ec550008c86da38762ad7776efbc4d0f7db261a9b40d50" exitCode=0 Oct 13 06:48:20 crc kubenswrapper[4833]: I1013 06:48:20.514284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" event={"ID":"bc812647-c154-4fe8-b6cc-fcf008841900","Type":"ContainerDied","Data":"75b83eb5a0f4a87001ec550008c86da38762ad7776efbc4d0f7db261a9b40d50"} Oct 13 06:48:20 crc kubenswrapper[4833]: I1013 06:48:20.517819 4833 generic.go:334] "Generic (PLEG): container finished" podID="949b1dbe-5e00-401f-a0a6-d0830a0092ad" containerID="90ee1a23cd38eedd15beb4921be258e3fbaa0411381f1f5b0925d97d2d4fcd83" exitCode=0 Oct 13 06:48:20 crc kubenswrapper[4833]: I1013 06:48:20.517945 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jpl8n" 
event={"ID":"949b1dbe-5e00-401f-a0a6-d0830a0092ad","Type":"ContainerDied","Data":"90ee1a23cd38eedd15beb4921be258e3fbaa0411381f1f5b0925d97d2d4fcd83"} Oct 13 06:48:21 crc kubenswrapper[4833]: I1013 06:48:21.484760 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 06:48:21 crc kubenswrapper[4833]: I1013 06:48:21.484848 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 06:48:21 crc kubenswrapper[4833]: I1013 06:48:21.808626 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 13 06:48:21 crc kubenswrapper[4833]: I1013 06:48:21.851642 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 13 06:48:21 crc kubenswrapper[4833]: I1013 06:48:21.907808 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" Oct 13 06:48:21 crc kubenswrapper[4833]: I1013 06:48:21.993283 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-zbzrf"] Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:21.998759 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" podUID="377af78b-ea09-46d2-939d-debdb6630796" containerName="dnsmasq-dns" containerID="cri-o://3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a" gracePeriod=10 Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.025311 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.025711 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.134083 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-scripts\") pod \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.134545 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-config-data\") pod \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.134604 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qmwj\" (UniqueName: \"kubernetes.io/projected/bc812647-c154-4fe8-b6cc-fcf008841900-kube-api-access-2qmwj\") pod \"bc812647-c154-4fe8-b6cc-fcf008841900\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.134643 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-scripts\") pod \"bc812647-c154-4fe8-b6cc-fcf008841900\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.134742 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-config-data\") pod \"bc812647-c154-4fe8-b6cc-fcf008841900\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.134777 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l555k\" (UniqueName: \"kubernetes.io/projected/949b1dbe-5e00-401f-a0a6-d0830a0092ad-kube-api-access-l555k\") pod \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.134804 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-combined-ca-bundle\") pod \"bc812647-c154-4fe8-b6cc-fcf008841900\" (UID: \"bc812647-c154-4fe8-b6cc-fcf008841900\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.134828 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-combined-ca-bundle\") pod \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\" (UID: \"949b1dbe-5e00-401f-a0a6-d0830a0092ad\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.140330 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-scripts" (OuterVolumeSpecName: "scripts") pod "949b1dbe-5e00-401f-a0a6-d0830a0092ad" (UID: "949b1dbe-5e00-401f-a0a6-d0830a0092ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.147707 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc812647-c154-4fe8-b6cc-fcf008841900-kube-api-access-2qmwj" (OuterVolumeSpecName: "kube-api-access-2qmwj") pod "bc812647-c154-4fe8-b6cc-fcf008841900" (UID: "bc812647-c154-4fe8-b6cc-fcf008841900"). InnerVolumeSpecName "kube-api-access-2qmwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.150751 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949b1dbe-5e00-401f-a0a6-d0830a0092ad-kube-api-access-l555k" (OuterVolumeSpecName: "kube-api-access-l555k") pod "949b1dbe-5e00-401f-a0a6-d0830a0092ad" (UID: "949b1dbe-5e00-401f-a0a6-d0830a0092ad"). InnerVolumeSpecName "kube-api-access-l555k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.180762 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-scripts" (OuterVolumeSpecName: "scripts") pod "bc812647-c154-4fe8-b6cc-fcf008841900" (UID: "bc812647-c154-4fe8-b6cc-fcf008841900"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.188078 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-config-data" (OuterVolumeSpecName: "config-data") pod "949b1dbe-5e00-401f-a0a6-d0830a0092ad" (UID: "949b1dbe-5e00-401f-a0a6-d0830a0092ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.191478 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "949b1dbe-5e00-401f-a0a6-d0830a0092ad" (UID: "949b1dbe-5e00-401f-a0a6-d0830a0092ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.205241 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-config-data" (OuterVolumeSpecName: "config-data") pod "bc812647-c154-4fe8-b6cc-fcf008841900" (UID: "bc812647-c154-4fe8-b6cc-fcf008841900"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.215674 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc812647-c154-4fe8-b6cc-fcf008841900" (UID: "bc812647-c154-4fe8-b6cc-fcf008841900"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.236984 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.237019 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l555k\" (UniqueName: \"kubernetes.io/projected/949b1dbe-5e00-401f-a0a6-d0830a0092ad-kube-api-access-l555k\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.237034 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.237047 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.237057 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.237066 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949b1dbe-5e00-401f-a0a6-d0830a0092ad-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.237073 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qmwj\" (UniqueName: \"kubernetes.io/projected/bc812647-c154-4fe8-b6cc-fcf008841900-kube-api-access-2qmwj\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.237081 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc812647-c154-4fe8-b6cc-fcf008841900-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.415137 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.440531 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-nb\") pod \"377af78b-ea09-46d2-939d-debdb6630796\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.440598 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sml8b\" (UniqueName: \"kubernetes.io/projected/377af78b-ea09-46d2-939d-debdb6630796-kube-api-access-sml8b\") pod \"377af78b-ea09-46d2-939d-debdb6630796\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.440710 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-sb\") pod \"377af78b-ea09-46d2-939d-debdb6630796\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.440733 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-swift-storage-0\") pod \"377af78b-ea09-46d2-939d-debdb6630796\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.440769 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-config\") pod \"377af78b-ea09-46d2-939d-debdb6630796\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.440899 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-svc\") pod \"377af78b-ea09-46d2-939d-debdb6630796\" (UID: \"377af78b-ea09-46d2-939d-debdb6630796\") " Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.448939 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377af78b-ea09-46d2-939d-debdb6630796-kube-api-access-sml8b" (OuterVolumeSpecName: "kube-api-access-sml8b") pod "377af78b-ea09-46d2-939d-debdb6630796" (UID: "377af78b-ea09-46d2-939d-debdb6630796"). InnerVolumeSpecName "kube-api-access-sml8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.494292 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "377af78b-ea09-46d2-939d-debdb6630796" (UID: "377af78b-ea09-46d2-939d-debdb6630796"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.497554 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "377af78b-ea09-46d2-939d-debdb6630796" (UID: "377af78b-ea09-46d2-939d-debdb6630796"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.503189 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "377af78b-ea09-46d2-939d-debdb6630796" (UID: "377af78b-ea09-46d2-939d-debdb6630796"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.505063 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "377af78b-ea09-46d2-939d-debdb6630796" (UID: "377af78b-ea09-46d2-939d-debdb6630796"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.515367 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-config" (OuterVolumeSpecName: "config") pod "377af78b-ea09-46d2-939d-debdb6630796" (UID: "377af78b-ea09-46d2-939d-debdb6630796"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.542990 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" event={"ID":"bc812647-c154-4fe8-b6cc-fcf008841900","Type":"ContainerDied","Data":"01df84021262d64ba9d9ac1c500de73258096a5dfb6417de1c3af1537e404a00"} Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.543026 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01df84021262d64ba9d9ac1c500de73258096a5dfb6417de1c3af1537e404a00" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.543084 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mc8l7" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.543787 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.544088 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sml8b\" (UniqueName: \"kubernetes.io/projected/377af78b-ea09-46d2-939d-debdb6630796-kube-api-access-sml8b\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.544151 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.544167 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.544179 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.544192 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/377af78b-ea09-46d2-939d-debdb6630796-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.548265 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jpl8n" event={"ID":"949b1dbe-5e00-401f-a0a6-d0830a0092ad","Type":"ContainerDied","Data":"f26933f7d88eaab824db653f2278be2664cb7d6534df9d1269049f3dde5b3bd0"} Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.548303 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f26933f7d88eaab824db653f2278be2664cb7d6534df9d1269049f3dde5b3bd0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.548371 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jpl8n" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.550919 4833 generic.go:334] "Generic (PLEG): container finished" podID="377af78b-ea09-46d2-939d-debdb6630796" containerID="3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a" exitCode=0 Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.551562 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.551870 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" event={"ID":"377af78b-ea09-46d2-939d-debdb6630796","Type":"ContainerDied","Data":"3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a"} Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.561324 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-zbzrf" event={"ID":"377af78b-ea09-46d2-939d-debdb6630796","Type":"ContainerDied","Data":"d76e11ae046a83edda71eb042d58e7277e560446a94bc08d5854c0ffc1e7f7d2"} Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.561370 4833 scope.go:117] "RemoveContainer" containerID="3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.567758 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.567793 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.595237 4833 scope.go:117] "RemoveContainer" containerID="3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.611790 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.629736 4833 scope.go:117] "RemoveContainer" containerID="3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a" Oct 13 06:48:22 crc kubenswrapper[4833]: E1013 06:48:22.633969 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a\": container with ID starting with 3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a not found: ID does not exist" containerID="3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.634039 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a"} err="failed to get container status \"3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a\": rpc error: code = NotFound desc = could not find container \"3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a\": container with ID starting with 3ec564f1c6684624dbb2bb2624aabef179dd9177b1c68381a773dc25cc713c3a not found: ID does not exist" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.634072 4833 scope.go:117] "RemoveContainer" containerID="3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc" Oct 13 06:48:22 crc kubenswrapper[4833]: E1013 06:48:22.634504 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc\": container with ID starting with 3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc not found: ID does not exist" containerID="3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.634549 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc"} err="failed to get container status \"3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc\": rpc error: code = NotFound desc = could not find container \"3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc\": container with ID starting with 3772da44d8ece66196f41a2e74f23f38837f1e074079fa27b9b7eac3a6b6c7cc not found: ID does not exist" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.651995 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-zbzrf"] Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.652038 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-zbzrf"] Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.669972 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 06:48:22 crc kubenswrapper[4833]: E1013 06:48:22.670422 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377af78b-ea09-46d2-939d-debdb6630796" containerName="dnsmasq-dns" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.670439 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="377af78b-ea09-46d2-939d-debdb6630796" containerName="dnsmasq-dns" Oct 13 06:48:22 crc kubenswrapper[4833]: E1013 06:48:22.670448 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949b1dbe-5e00-401f-a0a6-d0830a0092ad" containerName="nova-manage" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.670456 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="949b1dbe-5e00-401f-a0a6-d0830a0092ad" containerName="nova-manage" Oct 13 06:48:22 crc kubenswrapper[4833]: E1013 06:48:22.670472 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377af78b-ea09-46d2-939d-debdb6630796" containerName="init" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.670478 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="377af78b-ea09-46d2-939d-debdb6630796" containerName="init" Oct 13 06:48:22 crc kubenswrapper[4833]: E1013 06:48:22.670495 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc812647-c154-4fe8-b6cc-fcf008841900" containerName="nova-cell1-conductor-db-sync" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.670500 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc812647-c154-4fe8-b6cc-fcf008841900" containerName="nova-cell1-conductor-db-sync" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.670741 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="949b1dbe-5e00-401f-a0a6-d0830a0092ad" containerName="nova-manage" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.670763 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc812647-c154-4fe8-b6cc-fcf008841900" containerName="nova-cell1-conductor-db-sync" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.670773 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="377af78b-ea09-46d2-939d-debdb6630796" containerName="dnsmasq-dns" Oct 13 
06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.671437 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.674174 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.678457 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.734605 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.734855 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-log" containerID="cri-o://41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144" gracePeriod=30 Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.735554 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-api" containerID="cri-o://94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf" gracePeriod=30 Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.749554 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.749872 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrml6\" (UniqueName: \"kubernetes.io/projected/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-kube-api-access-mrml6\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.750009 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.808200 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.808516 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerName="nova-metadata-metadata" containerID="cri-o://fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa" gracePeriod=30 Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.809400 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerName="nova-metadata-log" containerID="cri-o://f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e" gracePeriod=30 Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.852027 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.852088 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrml6\" (UniqueName: \"kubernetes.io/projected/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-kube-api-access-mrml6\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.852223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.855968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.856663 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.876656 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrml6\" (UniqueName: \"kubernetes.io/projected/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-kube-api-access-mrml6\") pod \"nova-cell1-conductor-0\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.925583 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.925641 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 06:48:22 crc kubenswrapper[4833]: I1013 06:48:22.989376 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.072884 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.495463 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.576210 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-config-data\") pod \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.576556 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-logs\") pod \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.576732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt2n8\" (UniqueName: \"kubernetes.io/projected/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-kube-api-access-gt2n8\") pod \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.576923 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-combined-ca-bundle\") pod \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.577088 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-nova-metadata-tls-certs\") pod \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\" (UID: \"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529\") " Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.577909 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-logs" (OuterVolumeSpecName: "logs") pod "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" (UID: "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.583132 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-kube-api-access-gt2n8" (OuterVolumeSpecName: "kube-api-access-gt2n8") pod "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" (UID: "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529"). InnerVolumeSpecName "kube-api-access-gt2n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.586208 4833 generic.go:334] "Generic (PLEG): container finished" podID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerID="fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa" exitCode=0 Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.586386 4833 generic.go:334] "Generic (PLEG): container finished" podID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerID="f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e" exitCode=143 Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.586506 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529","Type":"ContainerDied","Data":"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa"} Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.586641 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529","Type":"ContainerDied","Data":"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e"} Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.586733 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529","Type":"ContainerDied","Data":"31f1bee0410e234c972a84882694bc84403e20697c521b812763644519e919ed"} Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.586666 4833 scope.go:117] "RemoveContainer" containerID="fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.586648 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.591010 4833 generic.go:334] "Generic (PLEG): container finished" podID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerID="41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144" exitCode=143 Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.591907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2846b39-c826-4f73-aff8-dccd5a1f4ad1","Type":"ContainerDied","Data":"41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144"} Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.592962 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.613281 4833 scope.go:117] "RemoveContainer" containerID="f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.616836 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" (UID: "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.617672 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-config-data" (OuterVolumeSpecName: "config-data") pod "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" (UID: "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.633265 4833 scope.go:117] "RemoveContainer" containerID="fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa" Oct 13 06:48:23 crc kubenswrapper[4833]: E1013 06:48:23.633698 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa\": container with ID starting with fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa not found: ID does not exist" containerID="fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.633736 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa"} err="failed to get container status \"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa\": rpc error: code = NotFound desc = could not find container \"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa\": container with ID starting with fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa not found: ID does not exist" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.633763 4833 scope.go:117] "RemoveContainer" containerID="f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e" Oct 13 06:48:23 crc kubenswrapper[4833]: E1013 06:48:23.634147 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e\": container with ID starting with f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e not found: ID does not exist" containerID="f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.634191 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e"} err="failed to get container status \"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e\": rpc error: code = NotFound desc = could not find container \"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e\": container with ID starting with f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e not found: ID does not exist" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.634219 4833 scope.go:117] "RemoveContainer" containerID="fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.634497 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa"} err="failed to get container status \"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa\": rpc error: code = NotFound desc = could not find container \"fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa\": container with ID starting with fdbd3c543aaf1f680eab27098852c21d8292059fe70bd3194b8c38df3625e5aa not found: ID does not exist" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.634520 4833 scope.go:117] "RemoveContainer" containerID="f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.634776 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e"} err="failed to get container status \"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e\": rpc error: code = NotFound desc = could not find container \"f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e\": container with ID starting with f43063c583d244a6b64bb0910dd47f1f4185c8d32b73b92fa6a6ea008a22a15e not found: ID does not exist" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.636880 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" (UID: "8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.681514 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.681906 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.681925 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.681938 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.681950 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt2n8\" (UniqueName: \"kubernetes.io/projected/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529-kube-api-access-gt2n8\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.939394 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.945572 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.963448 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:23 crc kubenswrapper[4833]: E1013 06:48:23.963879 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerName="nova-metadata-log" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.963900 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerName="nova-metadata-log" Oct 13 06:48:23 crc kubenswrapper[4833]: E1013 06:48:23.963939 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerName="nova-metadata-metadata" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.963949 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerName="nova-metadata-metadata" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.964765 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerName="nova-metadata-metadata" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.964792 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" containerName="nova-metadata-log" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.965891 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.970134 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.970327 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 06:48:23 crc kubenswrapper[4833]: I1013 06:48:23.975080 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.001629 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318447c0-0084-4b42-92ad-086709e0576b-logs\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.001702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.001774 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqhww\" (UniqueName: \"kubernetes.io/projected/318447c0-0084-4b42-92ad-086709e0576b-kube-api-access-sqhww\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.001912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.001962 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-config-data\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.103686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqhww\" (UniqueName: \"kubernetes.io/projected/318447c0-0084-4b42-92ad-086709e0576b-kube-api-access-sqhww\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.103758 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.103786 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-config-data\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.103913 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318447c0-0084-4b42-92ad-086709e0576b-logs\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.103973 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.104716 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318447c0-0084-4b42-92ad-086709e0576b-logs\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.109427 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.111321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.112506 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-config-data\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.125919 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqhww\" (UniqueName: \"kubernetes.io/projected/318447c0-0084-4b42-92ad-086709e0576b-kube-api-access-sqhww\") pod \"nova-metadata-0\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") " pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.281986 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.603889 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b","Type":"ContainerStarted","Data":"0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc"} Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.604299 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b","Type":"ContainerStarted","Data":"b5d84666afa529e541c19299ef85a3d905b02b6e03d65e000fa7b22c591ad4cb"} Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.603948 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="30cc9429-c129-488f-9839-f69607ee7640" containerName="nova-scheduler-scheduler" containerID="cri-o://cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c" gracePeriod=30 Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.604694 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.630851 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.630828684 podStartE2EDuration="2.630828684s" podCreationTimestamp="2025-10-13 06:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:24.624930359 +0000 UTC m=+1194.725353275" watchObservedRunningTime="2025-10-13 06:48:24.630828684 +0000 UTC m=+1194.731251600" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.636933 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377af78b-ea09-46d2-939d-debdb6630796" path="/var/lib/kubelet/pods/377af78b-ea09-46d2-939d-debdb6630796/volumes" Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.637628 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529" path="/var/lib/kubelet/pods/8ca377ab-8b8f-4abf-9ef1-6fa2e43e7529/volumes" Oct 13 06:48:24 crc kubenswrapper[4833]: W1013 06:48:24.749098 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod318447c0_0084_4b42_92ad_086709e0576b.slice/crio-c43a2e6bad732959845bce0303151e98b1f8e920e12981b12654fc07f87efeaf WatchSource:0}: Error finding container c43a2e6bad732959845bce0303151e98b1f8e920e12981b12654fc07f87efeaf: Status 404 returned error can't find the container with id c43a2e6bad732959845bce0303151e98b1f8e920e12981b12654fc07f87efeaf Oct 13 06:48:24 crc kubenswrapper[4833]: I1013 06:48:24.751393 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:48:25 crc kubenswrapper[4833]: I1013 06:48:25.618500 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318447c0-0084-4b42-92ad-086709e0576b","Type":"ContainerStarted","Data":"e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b"} Oct 13 06:48:25 crc kubenswrapper[4833]: I1013 06:48:25.618935 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"318447c0-0084-4b42-92ad-086709e0576b","Type":"ContainerStarted","Data":"5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899"} Oct 13 06:48:25 crc kubenswrapper[4833]: I1013 06:48:25.618959 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318447c0-0084-4b42-92ad-086709e0576b","Type":"ContainerStarted","Data":"c43a2e6bad732959845bce0303151e98b1f8e920e12981b12654fc07f87efeaf"} Oct 13 06:48:25 crc kubenswrapper[4833]: I1013 06:48:25.651025 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.650998607 podStartE2EDuration="2.650998607s" podCreationTimestamp="2025-10-13 06:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:25.642242812 +0000 UTC m=+1195.742665768" watchObservedRunningTime="2025-10-13 06:48:25.650998607 +0000 UTC m=+1195.751421533" Oct 13 06:48:26 crc kubenswrapper[4833]: E1013 06:48:26.811018 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 06:48:26 crc kubenswrapper[4833]: E1013 06:48:26.812946 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 06:48:26 crc kubenswrapper[4833]: E1013 06:48:26.814280 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 06:48:26 crc kubenswrapper[4833]: E1013 06:48:26.814325 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="30cc9429-c129-488f-9839-f69607ee7640" containerName="nova-scheduler-scheduler" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.435734 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.553285 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.641812 4833 generic.go:334] "Generic (PLEG): container finished" podID="30cc9429-c129-488f-9839-f69607ee7640" containerID="cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c" exitCode=0 Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.641850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"30cc9429-c129-488f-9839-f69607ee7640","Type":"ContainerDied","Data":"cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c"} Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.641874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"30cc9429-c129-488f-9839-f69607ee7640","Type":"ContainerDied","Data":"ac16962d55ccc4a0cc8f83d66f82be41a593e8d0cd8a691bfca25ad1f14b3f12"} Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.641892 4833 scope.go:117] "RemoveContainer" containerID="cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.641962 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.661518 4833 scope.go:117] "RemoveContainer" containerID="cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c" Oct 13 06:48:27 crc kubenswrapper[4833]: E1013 06:48:27.662342 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c\": container with ID starting with cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c not found: ID does not exist" containerID="cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.662370 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c"} err="failed to get container status \"cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c\": rpc error: code = NotFound desc = could not find container \"cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c\": container with ID starting with cce5dfc0de3798640853aa6a0f4b781269c9be1f56a2ec623efc8ddd7ae0957c not found: ID does not exist" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.696722 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6klls\" (UniqueName: \"kubernetes.io/projected/30cc9429-c129-488f-9839-f69607ee7640-kube-api-access-6klls\") pod \"30cc9429-c129-488f-9839-f69607ee7640\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.696823 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-config-data\") pod \"30cc9429-c129-488f-9839-f69607ee7640\" (UID: \"30cc9429-c129-488f-9839-f69607ee7640\") " Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.697012 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-combined-ca-bundle\") pod \"30cc9429-c129-488f-9839-f69607ee7640\" (UID: 
\"30cc9429-c129-488f-9839-f69607ee7640\") " Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.704823 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cc9429-c129-488f-9839-f69607ee7640-kube-api-access-6klls" (OuterVolumeSpecName: "kube-api-access-6klls") pod "30cc9429-c129-488f-9839-f69607ee7640" (UID: "30cc9429-c129-488f-9839-f69607ee7640"). InnerVolumeSpecName "kube-api-access-6klls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.728857 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30cc9429-c129-488f-9839-f69607ee7640" (UID: "30cc9429-c129-488f-9839-f69607ee7640"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.731685 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-config-data" (OuterVolumeSpecName: "config-data") pod "30cc9429-c129-488f-9839-f69607ee7640" (UID: "30cc9429-c129-488f-9839-f69607ee7640"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.798592 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.798652 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6klls\" (UniqueName: \"kubernetes.io/projected/30cc9429-c129-488f-9839-f69607ee7640-kube-api-access-6klls\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.798664 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cc9429-c129-488f-9839-f69607ee7640-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.980497 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:27 crc kubenswrapper[4833]: I1013 06:48:27.991363 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.004131 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:28 crc kubenswrapper[4833]: E1013 06:48:28.005014 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cc9429-c129-488f-9839-f69607ee7640" containerName="nova-scheduler-scheduler" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.005046 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cc9429-c129-488f-9839-f69607ee7640" containerName="nova-scheduler-scheduler" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.005350 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cc9429-c129-488f-9839-f69607ee7640" containerName="nova-scheduler-scheduler" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.006163 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.009028 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.036142 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.102871 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-config-data\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.102961 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.102985 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjbdg\" (UniqueName: \"kubernetes.io/projected/9dea0d66-db51-422c-add8-6d97b7731116-kube-api-access-tjbdg\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.204585 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-config-data\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.204701 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.204733 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjbdg\" (UniqueName: \"kubernetes.io/projected/9dea0d66-db51-422c-add8-6d97b7731116-kube-api-access-tjbdg\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.209684 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.209762 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-config-data\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.222185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjbdg\" (UniqueName: 
\"kubernetes.io/projected/9dea0d66-db51-422c-add8-6d97b7731116-kube-api-access-tjbdg\") pod \"nova-scheduler-0\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.322651 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.593002 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.641359 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30cc9429-c129-488f-9839-f69607ee7640" path="/var/lib/kubelet/pods/30cc9429-c129-488f-9839-f69607ee7640/volumes" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.655839 4833 generic.go:334] "Generic (PLEG): container finished" podID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerID="94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf" exitCode=0 Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.655884 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2846b39-c826-4f73-aff8-dccd5a1f4ad1","Type":"ContainerDied","Data":"94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf"} Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.655914 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2846b39-c826-4f73-aff8-dccd5a1f4ad1","Type":"ContainerDied","Data":"b5d0cb9036b38837b39e284076018d88e4defdde9e50a55c17c3703d2b683773"} Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.655914 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.655932 4833 scope.go:117] "RemoveContainer" containerID="94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.688988 4833 scope.go:117] "RemoveContainer" containerID="41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.709608 4833 scope.go:117] "RemoveContainer" containerID="94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf" Oct 13 06:48:28 crc kubenswrapper[4833]: E1013 06:48:28.710864 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf\": container with ID starting with 94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf not found: ID does not exist" containerID="94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.710947 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf"} err="failed to get container status \"94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf\": rpc error: code = NotFound desc = could not find container \"94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf\": container with ID starting with 94b0bc1da13c4a10e0526402ab5b203934392d21cd77951bd225cfecf37e89cf not found: ID does not exist" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.710984 4833 scope.go:117] "RemoveContainer" 
containerID="41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144" Oct 13 06:48:28 crc kubenswrapper[4833]: E1013 06:48:28.711593 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144\": container with ID starting with 41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144 not found: ID does not exist" containerID="41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.711643 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144"} err="failed to get container status \"41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144\": rpc error: code = NotFound desc = could not find container \"41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144\": container with ID starting with 41535e1ffe936b7cec7b5c301b358027575f5edfb8097d0294cec6c89d967144 not found: ID does not exist" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.715287 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-combined-ca-bundle\") pod \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.715514 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-logs\") pod \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.715572 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-config-data\") pod \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.715664 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv9r6\" (UniqueName: \"kubernetes.io/projected/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-kube-api-access-zv9r6\") pod \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\" (UID: \"f2846b39-c826-4f73-aff8-dccd5a1f4ad1\") " Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.716146 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-logs" (OuterVolumeSpecName: "logs") pod "f2846b39-c826-4f73-aff8-dccd5a1f4ad1" (UID: "f2846b39-c826-4f73-aff8-dccd5a1f4ad1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.720424 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-kube-api-access-zv9r6" (OuterVolumeSpecName: "kube-api-access-zv9r6") pod "f2846b39-c826-4f73-aff8-dccd5a1f4ad1" (UID: "f2846b39-c826-4f73-aff8-dccd5a1f4ad1"). InnerVolumeSpecName "kube-api-access-zv9r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.748747 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2846b39-c826-4f73-aff8-dccd5a1f4ad1" (UID: "f2846b39-c826-4f73-aff8-dccd5a1f4ad1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.751410 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-config-data" (OuterVolumeSpecName: "config-data") pod "f2846b39-c826-4f73-aff8-dccd5a1f4ad1" (UID: "f2846b39-c826-4f73-aff8-dccd5a1f4ad1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.818523 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.818613 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.818671 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv9r6\" (UniqueName: \"kubernetes.io/projected/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-kube-api-access-zv9r6\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.818691 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2846b39-c826-4f73-aff8-dccd5a1f4ad1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.831692 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:48:28 crc kubenswrapper[4833]: I1013 06:48:28.996235 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.008880 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.029746 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:29 crc kubenswrapper[4833]: E1013 06:48:29.030238 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-api" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.030258 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-api" Oct 13 06:48:29 crc kubenswrapper[4833]: E1013 06:48:29.030295 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-log" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.030303 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-log" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.030526 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" 
containerName="nova-api-log" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.030588 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" containerName="nova-api-api" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.031768 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.037203 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.039121 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.124116 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.124203 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29886d4a-73c4-4f45-94c5-d551b1e1af37-logs\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.124283 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-config-data\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.124350 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfsc\" (UniqueName: \"kubernetes.io/projected/29886d4a-73c4-4f45-94c5-d551b1e1af37-kube-api-access-pgfsc\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.226633 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29886d4a-73c4-4f45-94c5-d551b1e1af37-logs\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.226783 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-config-data\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.226817 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgfsc\" (UniqueName: \"kubernetes.io/projected/29886d4a-73c4-4f45-94c5-d551b1e1af37-kube-api-access-pgfsc\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.226964 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.227321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29886d4a-73c4-4f45-94c5-d551b1e1af37-logs\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.230530 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-config-data\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.231779 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.247698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgfsc\" (UniqueName: \"kubernetes.io/projected/29886d4a-73c4-4f45-94c5-d551b1e1af37-kube-api-access-pgfsc\") pod \"nova-api-0\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.282138 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.282198 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.358081 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.667223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9dea0d66-db51-422c-add8-6d97b7731116","Type":"ContainerStarted","Data":"56487496984f25952988b6e0884601eb729cbc56ed79b090bd191c9aae56adb4"} Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.667272 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9dea0d66-db51-422c-add8-6d97b7731116","Type":"ContainerStarted","Data":"b92d7bc7352b1a0769c6ab3bcab3f342f24c4a5fedb425c8ef1e1c364b5291a6"} Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.692805 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.692789391 podStartE2EDuration="2.692789391s" podCreationTimestamp="2025-10-13 06:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:29.690915219 +0000 UTC m=+1199.791338135" watchObservedRunningTime="2025-10-13 06:48:29.692789391 +0000 UTC m=+1199.793212307" Oct 13 06:48:29 crc kubenswrapper[4833]: I1013 06:48:29.849657 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:29 crc kubenswrapper[4833]: W1013 06:48:29.854526 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29886d4a_73c4_4f45_94c5_d551b1e1af37.slice/crio-01769f7464dddcb4f139d3918b4566f2beb545d8f7de23d19f582cddd3324562 WatchSource:0}: Error finding container 01769f7464dddcb4f139d3918b4566f2beb545d8f7de23d19f582cddd3324562: Status 404 returned error can't find the container with id 01769f7464dddcb4f139d3918b4566f2beb545d8f7de23d19f582cddd3324562 Oct 13 06:48:30 crc kubenswrapper[4833]: I1013 06:48:30.644652 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2846b39-c826-4f73-aff8-dccd5a1f4ad1" path="/var/lib/kubelet/pods/f2846b39-c826-4f73-aff8-dccd5a1f4ad1/volumes" Oct 13 06:48:30 crc kubenswrapper[4833]: I1013 06:48:30.683373 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29886d4a-73c4-4f45-94c5-d551b1e1af37","Type":"ContainerStarted","Data":"239b43a5d8dc3cc744ec699b3fd73a0404e9f8db7a984f3e20f0c54b11566268"} Oct 13 06:48:30 crc kubenswrapper[4833]: I1013 06:48:30.683415 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29886d4a-73c4-4f45-94c5-d551b1e1af37","Type":"ContainerStarted","Data":"d15003c78174e77e48dd8ba92e97da7551c3e2cc1460bd79bb4ac6f52e0430dd"} Oct 13 06:48:30 crc kubenswrapper[4833]: I1013 06:48:30.683433 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29886d4a-73c4-4f45-94c5-d551b1e1af37","Type":"ContainerStarted","Data":"01769f7464dddcb4f139d3918b4566f2beb545d8f7de23d19f582cddd3324562"} Oct 13 06:48:30 crc kubenswrapper[4833]: I1013 06:48:30.720729 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.720707631 podStartE2EDuration="2.720707631s" podCreationTimestamp="2025-10-13 06:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:30.707174642 +0000 UTC m=+1200.807597548" watchObservedRunningTime="2025-10-13 
06:48:30.720707631 +0000 UTC m=+1200.821130547" Oct 13 06:48:31 crc kubenswrapper[4833]: I1013 06:48:31.607404 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:48:31 crc kubenswrapper[4833]: I1013 06:48:31.607977 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="40245b56-c93c-4c17-873a-dcd87e3f041b" containerName="kube-state-metrics" containerID="cri-o://7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca" gracePeriod=30 Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.080224 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.179727 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxmfh\" (UniqueName: \"kubernetes.io/projected/40245b56-c93c-4c17-873a-dcd87e3f041b-kube-api-access-fxmfh\") pod \"40245b56-c93c-4c17-873a-dcd87e3f041b\" (UID: \"40245b56-c93c-4c17-873a-dcd87e3f041b\") " Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.187714 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40245b56-c93c-4c17-873a-dcd87e3f041b-kube-api-access-fxmfh" (OuterVolumeSpecName: "kube-api-access-fxmfh") pod "40245b56-c93c-4c17-873a-dcd87e3f041b" (UID: "40245b56-c93c-4c17-873a-dcd87e3f041b"). InnerVolumeSpecName "kube-api-access-fxmfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.281791 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxmfh\" (UniqueName: \"kubernetes.io/projected/40245b56-c93c-4c17-873a-dcd87e3f041b-kube-api-access-fxmfh\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.703969 4833 generic.go:334] "Generic (PLEG): container finished" podID="40245b56-c93c-4c17-873a-dcd87e3f041b" containerID="7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca" exitCode=2 Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.704012 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"40245b56-c93c-4c17-873a-dcd87e3f041b","Type":"ContainerDied","Data":"7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca"} Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.704036 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"40245b56-c93c-4c17-873a-dcd87e3f041b","Type":"ContainerDied","Data":"594667a192b9b362db3df11438c66001e3f367123a3987fc380ca4d8b9efe160"} Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.704052 4833 scope.go:117] "RemoveContainer" containerID="7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.704181 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.723608 4833 scope.go:117] "RemoveContainer" containerID="7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca" Oct 13 06:48:32 crc kubenswrapper[4833]: E1013 06:48:32.724329 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca\": container with ID starting with 7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca not found: ID does not exist" containerID="7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.724369 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca"} err="failed to get container status \"7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca\": rpc error: code = NotFound desc = could not find container \"7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca\": container with ID starting with 7ccea8d5874604fb1feb38a49f7828ee3685b0c011032cb698d6975b30650dca not found: ID does not exist" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.742590 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.752504 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.766684 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:48:32 crc kubenswrapper[4833]: E1013 06:48:32.767114 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40245b56-c93c-4c17-873a-dcd87e3f041b" containerName="kube-state-metrics" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.767137 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="40245b56-c93c-4c17-873a-dcd87e3f041b" containerName="kube-state-metrics" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.767338 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="40245b56-c93c-4c17-873a-dcd87e3f041b" containerName="kube-state-metrics" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.767956 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.769648 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.769648 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.777094 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.893259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.893360 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.893481 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.893502 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trn26\" (UniqueName: \"kubernetes.io/projected/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-api-access-trn26\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.994621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.994661 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trn26\" (UniqueName: \"kubernetes.io/projected/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-api-access-trn26\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.994691 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.994755 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.998738 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:32 crc kubenswrapper[4833]: I1013 06:48:32.998972 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.003263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.017021 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.021945 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trn26\" (UniqueName: \"kubernetes.io/projected/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-api-access-trn26\") pod \"kube-state-metrics-0\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " pod="openstack/kube-state-metrics-0" Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.082924 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.324225 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.557091 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.675965 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.676288 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="ceilometer-central-agent" containerID="cri-o://04b344348eb62bb1741dbd68b52de9e84fd9af3c9fd266bc53196689afe34baf" gracePeriod=30 Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.676329 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="proxy-httpd" containerID="cri-o://dccc56cae35eeb688e15c7824a5075cbc1f00517bde392388433981c1ee7e18a" gracePeriod=30 Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.676335 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="sg-core" containerID="cri-o://26823de353a8760c5ebbcaaefc6be008fc3a41771a195d964e142397c91dd7d6" gracePeriod=30 Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.676374 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="ceilometer-notification-agent" containerID="cri-o://7c43d137cab1969d963afc221baf1b8f8d88dea0cb5e37184e172cfef9f58cac" gracePeriod=30 Oct 13 06:48:33 crc kubenswrapper[4833]: I1013 06:48:33.717148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11f688b0-b3aa-46f7-a700-c6619e3a3951","Type":"ContainerStarted","Data":"46e55a5dc3c01f45b9a9d34700ad0acecc5eeb0b7d1384d0632bb706f27a7d39"} Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.282460 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.282558 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.640726 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40245b56-c93c-4c17-873a-dcd87e3f041b" path="/var/lib/kubelet/pods/40245b56-c93c-4c17-873a-dcd87e3f041b/volumes" Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.730815 4833 generic.go:334] "Generic (PLEG): container finished" podID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerID="dccc56cae35eeb688e15c7824a5075cbc1f00517bde392388433981c1ee7e18a" exitCode=0 Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.731084 4833 generic.go:334] "Generic (PLEG): container finished" podID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerID="26823de353a8760c5ebbcaaefc6be008fc3a41771a195d964e142397c91dd7d6" exitCode=2 Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.731095 4833 generic.go:334] "Generic (PLEG): container finished" podID="f564521f-ebb2-4103-9326-2acfcb6c90e4" 
containerID="04b344348eb62bb1741dbd68b52de9e84fd9af3c9fd266bc53196689afe34baf" exitCode=0 Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.731140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerDied","Data":"dccc56cae35eeb688e15c7824a5075cbc1f00517bde392388433981c1ee7e18a"} Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.731168 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerDied","Data":"26823de353a8760c5ebbcaaefc6be008fc3a41771a195d964e142397c91dd7d6"} Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.731182 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerDied","Data":"04b344348eb62bb1741dbd68b52de9e84fd9af3c9fd266bc53196689afe34baf"} Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.734855 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11f688b0-b3aa-46f7-a700-c6619e3a3951","Type":"ContainerStarted","Data":"96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5"} Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.734905 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 13 06:48:34 crc kubenswrapper[4833]: I1013 06:48:34.762466 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.314443451 podStartE2EDuration="2.762439563s" podCreationTimestamp="2025-10-13 06:48:32 +0000 UTC" firstStartedPulling="2025-10-13 06:48:33.551034596 +0000 UTC m=+1203.651457512" lastFinishedPulling="2025-10-13 06:48:33.999030718 +0000 UTC m=+1204.099453624" observedRunningTime="2025-10-13 06:48:34.754777018 +0000 UTC m=+1204.855199944" watchObservedRunningTime="2025-10-13 06:48:34.762439563 +0000 UTC m=+1204.862862489" Oct 13 06:48:35 crc kubenswrapper[4833]: I1013 06:48:35.296717 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 06:48:35 crc kubenswrapper[4833]: I1013 06:48:35.296717 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 06:48:36 crc kubenswrapper[4833]: I1013 06:48:36.753555 4833 generic.go:334] "Generic (PLEG): container finished" podID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerID="7c43d137cab1969d963afc221baf1b8f8d88dea0cb5e37184e172cfef9f58cac" exitCode=0 Oct 13 06:48:36 crc kubenswrapper[4833]: I1013 06:48:36.753640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerDied","Data":"7c43d137cab1969d963afc221baf1b8f8d88dea0cb5e37184e172cfef9f58cac"} Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.087585 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.173371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxd9\" (UniqueName: \"kubernetes.io/projected/f564521f-ebb2-4103-9326-2acfcb6c90e4-kube-api-access-vbxd9\") pod \"f564521f-ebb2-4103-9326-2acfcb6c90e4\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.173422 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-run-httpd\") pod \"f564521f-ebb2-4103-9326-2acfcb6c90e4\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.173475 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-config-data\") pod \"f564521f-ebb2-4103-9326-2acfcb6c90e4\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.173610 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-scripts\") pod \"f564521f-ebb2-4103-9326-2acfcb6c90e4\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.173656 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-sg-core-conf-yaml\") pod \"f564521f-ebb2-4103-9326-2acfcb6c90e4\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.173674 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-log-httpd\") pod \"f564521f-ebb2-4103-9326-2acfcb6c90e4\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.173741 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-combined-ca-bundle\") pod \"f564521f-ebb2-4103-9326-2acfcb6c90e4\" (UID: \"f564521f-ebb2-4103-9326-2acfcb6c90e4\") " Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.175282 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f564521f-ebb2-4103-9326-2acfcb6c90e4" (UID: "f564521f-ebb2-4103-9326-2acfcb6c90e4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.181662 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f564521f-ebb2-4103-9326-2acfcb6c90e4" (UID: "f564521f-ebb2-4103-9326-2acfcb6c90e4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.186922 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f564521f-ebb2-4103-9326-2acfcb6c90e4-kube-api-access-vbxd9" (OuterVolumeSpecName: "kube-api-access-vbxd9") pod "f564521f-ebb2-4103-9326-2acfcb6c90e4" (UID: "f564521f-ebb2-4103-9326-2acfcb6c90e4"). InnerVolumeSpecName "kube-api-access-vbxd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.214795 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-scripts" (OuterVolumeSpecName: "scripts") pod "f564521f-ebb2-4103-9326-2acfcb6c90e4" (UID: "f564521f-ebb2-4103-9326-2acfcb6c90e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.218933 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f564521f-ebb2-4103-9326-2acfcb6c90e4" (UID: "f564521f-ebb2-4103-9326-2acfcb6c90e4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.275768 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f564521f-ebb2-4103-9326-2acfcb6c90e4" (UID: "f564521f-ebb2-4103-9326-2acfcb6c90e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.276285 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.276316 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.276325 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.276334 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.276344 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxd9\" (UniqueName: \"kubernetes.io/projected/f564521f-ebb2-4103-9326-2acfcb6c90e4-kube-api-access-vbxd9\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.276352 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f564521f-ebb2-4103-9326-2acfcb6c90e4-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.288817 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-config-data" (OuterVolumeSpecName: "config-data") pod "f564521f-ebb2-4103-9326-2acfcb6c90e4" (UID: "f564521f-ebb2-4103-9326-2acfcb6c90e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.378375 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564521f-ebb2-4103-9326-2acfcb6c90e4-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.766797 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f564521f-ebb2-4103-9326-2acfcb6c90e4","Type":"ContainerDied","Data":"41ffc940e438488d037ac5dff346a1c8f1b485f36b34b89b7a0aa77cc1c39f6e"} Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.766864 4833 scope.go:117] "RemoveContainer" containerID="dccc56cae35eeb688e15c7824a5075cbc1f00517bde392388433981c1ee7e18a" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.769620 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.796033 4833 scope.go:117] "RemoveContainer" containerID="26823de353a8760c5ebbcaaefc6be008fc3a41771a195d964e142397c91dd7d6" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.807732 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.818350 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.822138 4833 scope.go:117] "RemoveContainer" containerID="7c43d137cab1969d963afc221baf1b8f8d88dea0cb5e37184e172cfef9f58cac" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.831982 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:37 crc kubenswrapper[4833]: E1013 06:48:37.832439 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="ceilometer-central-agent" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.832463 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="ceilometer-central-agent" Oct 13 06:48:37 crc kubenswrapper[4833]: E1013 06:48:37.832477 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="sg-core" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.832484 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="sg-core" Oct 13 06:48:37 crc kubenswrapper[4833]: E1013 06:48:37.832515 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="ceilometer-notification-agent" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.832525 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="ceilometer-notification-agent" Oct 13 06:48:37 crc kubenswrapper[4833]: E1013 06:48:37.832663 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="proxy-httpd" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.832675 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="proxy-httpd" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.832912 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="ceilometer-central-agent" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.832942 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="sg-core" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.832953 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="proxy-httpd" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.832964 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" containerName="ceilometer-notification-agent" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.835068 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.838647 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.838872 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.839022 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.848071 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.852115 4833 scope.go:117] "RemoveContainer" containerID="04b344348eb62bb1741dbd68b52de9e84fd9af3c9fd266bc53196689afe34baf" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.886787 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.886837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-config-data\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.886868 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mckg\" (UniqueName: \"kubernetes.io/projected/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-kube-api-access-6mckg\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.886942 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.887011 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.887032 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.887077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-scripts\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.887106 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.988440 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.988745 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-config-data\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.988770 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mckg\" (UniqueName: \"kubernetes.io/projected/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-kube-api-access-6mckg\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.988798 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.988862 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.988880 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.988909 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-scripts\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.988931 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.989687 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.990063 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.993040 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-scripts\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.995374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.995413 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:37 crc kubenswrapper[4833]: I1013 06:48:37.995386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.001348 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-config-data\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.009111 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mckg\" (UniqueName: \"kubernetes.io/projected/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-kube-api-access-6mckg\") pod \"ceilometer-0\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " pod="openstack/ceilometer-0" Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.153362 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.324060 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.353114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.621684 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.638893 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f564521f-ebb2-4103-9326-2acfcb6c90e4" path="/var/lib/kubelet/pods/f564521f-ebb2-4103-9326-2acfcb6c90e4/volumes" Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.776624 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerStarted","Data":"f4ea9f41fae00cb161a0a7a32890a04e532dcd26736bd83fb8ed6cf3ddeb614e"} Oct 13 06:48:38 crc kubenswrapper[4833]: I1013 06:48:38.815684 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 13 06:48:39 crc kubenswrapper[4833]: I1013 06:48:39.358657 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 06:48:39 crc kubenswrapper[4833]: I1013 06:48:39.359081 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 06:48:39 crc kubenswrapper[4833]: I1013 06:48:39.795108 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerStarted","Data":"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d"} Oct 13 06:48:40 crc kubenswrapper[4833]: I1013 06:48:40.441816 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 06:48:40 crc kubenswrapper[4833]: I1013 06:48:40.441818 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 06:48:40 crc kubenswrapper[4833]: I1013 06:48:40.806930 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerStarted","Data":"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd"} Oct 13 06:48:41 crc kubenswrapper[4833]: I1013 06:48:41.818822 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerStarted","Data":"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d"} Oct 13 06:48:42 crc kubenswrapper[4833]: I1013 06:48:42.828851 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerStarted","Data":"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878"} Oct 13 06:48:42 crc kubenswrapper[4833]: 
I1013 06:48:42.830654 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 06:48:42 crc kubenswrapper[4833]: I1013 06:48:42.848615 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.361346116 podStartE2EDuration="5.848595893s" podCreationTimestamp="2025-10-13 06:48:37 +0000 UTC" firstStartedPulling="2025-10-13 06:48:38.628697882 +0000 UTC m=+1208.729120798" lastFinishedPulling="2025-10-13 06:48:42.115947659 +0000 UTC m=+1212.216370575" observedRunningTime="2025-10-13 06:48:42.848115659 +0000 UTC m=+1212.948538575" watchObservedRunningTime="2025-10-13 06:48:42.848595893 +0000 UTC m=+1212.949018809" Oct 13 06:48:43 crc kubenswrapper[4833]: I1013 06:48:43.094792 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 13 06:48:44 crc kubenswrapper[4833]: I1013 06:48:44.286323 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 13 06:48:44 crc kubenswrapper[4833]: I1013 06:48:44.290381 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 13 06:48:44 crc kubenswrapper[4833]: I1013 06:48:44.296382 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 13 06:48:44 crc kubenswrapper[4833]: I1013 06:48:44.851667 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.866324 4833 generic.go:334] "Generic (PLEG): container finished" podID="f75d210b-f440-4d30-ae98-927b7660dad6" containerID="bdea18d1dda04a6f48a543d7b832b414dd45a318e0945919afca57e801a3a01c" exitCode=137 Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.866384 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f75d210b-f440-4d30-ae98-927b7660dad6","Type":"ContainerDied","Data":"bdea18d1dda04a6f48a543d7b832b414dd45a318e0945919afca57e801a3a01c"} Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.866930 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f75d210b-f440-4d30-ae98-927b7660dad6","Type":"ContainerDied","Data":"c1e92efc1a98f3d5da5d97e47e91766c4493561c4366d6a2a532622e5c10d42f"} Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.866946 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e92efc1a98f3d5da5d97e47e91766c4493561c4366d6a2a532622e5c10d42f" Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.900014 4833 util.go:48] "No ready sandbox for pod can be found. 
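The pod_startup_latency_tracker entries are internally consistent: for ceilometer-0, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Checking the arithmetic against the logged wall-clock times:

package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the format they appear in the log above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-13 06:48:37 +0000 UTC")              // podCreationTimestamp
	firstPull := mustParse("2025-10-13 06:48:38.628697882 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-10-13 06:48:42.115947659 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-10-13 06:48:42.848595893 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // 5.848595893s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 3.487249777s spent pulling images
	slo := e2e - pull               // 2.361346116s = podStartSLOduration
	fmt.Println(e2e, pull, slo)
	// Note: the kubelet appears to compute with the monotonic (m=+...) readings,
	// so a wall-clock recomputation can be off by nanoseconds for other pods.
}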
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.964097 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc4xj\" (UniqueName: \"kubernetes.io/projected/f75d210b-f440-4d30-ae98-927b7660dad6-kube-api-access-wc4xj\") pod \"f75d210b-f440-4d30-ae98-927b7660dad6\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.964173 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-config-data\") pod \"f75d210b-f440-4d30-ae98-927b7660dad6\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.964193 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-combined-ca-bundle\") pod \"f75d210b-f440-4d30-ae98-927b7660dad6\" (UID: \"f75d210b-f440-4d30-ae98-927b7660dad6\") " Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.989435 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75d210b-f440-4d30-ae98-927b7660dad6-kube-api-access-wc4xj" (OuterVolumeSpecName: "kube-api-access-wc4xj") pod "f75d210b-f440-4d30-ae98-927b7660dad6" (UID: "f75d210b-f440-4d30-ae98-927b7660dad6"). InnerVolumeSpecName "kube-api-access-wc4xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:46 crc kubenswrapper[4833]: I1013 06:48:46.997187 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f75d210b-f440-4d30-ae98-927b7660dad6" (UID: "f75d210b-f440-4d30-ae98-927b7660dad6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.012384 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-config-data" (OuterVolumeSpecName: "config-data") pod "f75d210b-f440-4d30-ae98-927b7660dad6" (UID: "f75d210b-f440-4d30-ae98-927b7660dad6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.066850 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc4xj\" (UniqueName: \"kubernetes.io/projected/f75d210b-f440-4d30-ae98-927b7660dad6-kube-api-access-wc4xj\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.066888 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.066898 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d210b-f440-4d30-ae98-927b7660dad6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.876579 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.920926 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.929340 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.951938 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:47 crc kubenswrapper[4833]: E1013 06:48:47.952533 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75d210b-f440-4d30-ae98-927b7660dad6" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.952585 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75d210b-f440-4d30-ae98-927b7660dad6" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.952988 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75d210b-f440-4d30-ae98-927b7660dad6" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.954141 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.958404 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.959634 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.959788 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.963841 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.984324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.984438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.984499 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.984674 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbx2t\" (UniqueName: 
\"kubernetes.io/projected/7113b07b-875e-4a09-a221-be312e4d0dce-kube-api-access-wbx2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:47 crc kubenswrapper[4833]: I1013 06:48:47.984788 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.086141 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.086228 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.086262 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.086313 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbx2t\" (UniqueName: \"kubernetes.io/projected/7113b07b-875e-4a09-a221-be312e4d0dce-kube-api-access-wbx2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.086459 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.094488 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.098068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.100127 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-config-data\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.101151 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.141260 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbx2t\" (UniqueName: \"kubernetes.io/projected/7113b07b-875e-4a09-a221-be312e4d0dce-kube-api-access-wbx2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.287304 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.641279 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75d210b-f440-4d30-ae98-927b7660dad6" path="/var/lib/kubelet/pods/f75d210b-f440-4d30-ae98-927b7660dad6/volumes" Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.806230 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:48:48 crc kubenswrapper[4833]: W1013 06:48:48.806769 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7113b07b_875e_4a09_a221_be312e4d0dce.slice/crio-3cf0379046d4ed15a16b89883583db031e5ad6809c3a8317821f6d6c5711d528 WatchSource:0}: Error finding container 3cf0379046d4ed15a16b89883583db031e5ad6809c3a8317821f6d6c5711d528: Status 404 returned error can't find the container with id 3cf0379046d4ed15a16b89883583db031e5ad6809c3a8317821f6d6c5711d528 Oct 13 06:48:48 crc kubenswrapper[4833]: I1013 06:48:48.886958 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7113b07b-875e-4a09-a221-be312e4d0dce","Type":"ContainerStarted","Data":"3cf0379046d4ed15a16b89883583db031e5ad6809c3a8317821f6d6c5711d528"} Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.362689 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.363078 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.363596 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.363619 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.366385 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.369163 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.583583 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-t557n"] Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.585453 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.612009 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-t557n"] Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.720976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.721041 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.721194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-config\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.721226 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.721369 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.721413 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgb85\" (UniqueName: \"kubernetes.io/projected/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-kube-api-access-lgb85\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.823070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.823132 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.823183 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-config\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.823208 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.823273 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.823307 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgb85\" (UniqueName: \"kubernetes.io/projected/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-kube-api-access-lgb85\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.824041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.824069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.824354 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-config\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.824668 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.825333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.844653 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgb85\" (UniqueName: 
\"kubernetes.io/projected/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-kube-api-access-lgb85\") pod \"dnsmasq-dns-6d4d96bb9-t557n\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.901590 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7113b07b-875e-4a09-a221-be312e4d0dce","Type":"ContainerStarted","Data":"dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71"} Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.922347 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:49 crc kubenswrapper[4833]: I1013 06:48:49.936919 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.936899584 podStartE2EDuration="2.936899584s" podCreationTimestamp="2025-10-13 06:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:49.923419477 +0000 UTC m=+1220.023842403" watchObservedRunningTime="2025-10-13 06:48:49.936899584 +0000 UTC m=+1220.037322500" Oct 13 06:48:50 crc kubenswrapper[4833]: I1013 06:48:50.413101 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-t557n"] Oct 13 06:48:50 crc kubenswrapper[4833]: I1013 06:48:50.914136 4833 generic.go:334] "Generic (PLEG): container finished" podID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" containerID="39da9c88b223beca5465c065282b2d0f918ef244673bc10245cc5a611b105a98" exitCode=0 Oct 13 06:48:50 crc kubenswrapper[4833]: I1013 06:48:50.914242 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" event={"ID":"61fe1ee9-51ff-4f77-8dd7-4e29e3365556","Type":"ContainerDied","Data":"39da9c88b223beca5465c065282b2d0f918ef244673bc10245cc5a611b105a98"} Oct 13 06:48:50 crc kubenswrapper[4833]: I1013 06:48:50.914549 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" event={"ID":"61fe1ee9-51ff-4f77-8dd7-4e29e3365556","Type":"ContainerStarted","Data":"5a42c0a290cb727ea32cd425f987d2d33ebb3b618ecc9547251db15c231d4309"} Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.755884 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.756530 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="ceilometer-central-agent" containerID="cri-o://7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d" gracePeriod=30 Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.756919 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="sg-core" containerID="cri-o://b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d" gracePeriod=30 Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.756967 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="proxy-httpd" containerID="cri-o://0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878" gracePeriod=30 Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 
06:48:51.756984 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="ceilometer-notification-agent" containerID="cri-o://4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd" gracePeriod=30 Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.926951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" event={"ID":"61fe1ee9-51ff-4f77-8dd7-4e29e3365556","Type":"ContainerStarted","Data":"49f0e204158e68516824c137ad4d21ef43d7abc0112420959fabb71ee75d4288"} Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.926998 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.929484 4833 generic.go:334] "Generic (PLEG): container finished" podID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerID="0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878" exitCode=0 Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.929519 4833 generic.go:334] "Generic (PLEG): container finished" podID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerID="b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d" exitCode=2 Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.929558 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerDied","Data":"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878"} Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.929581 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerDied","Data":"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d"} Oct 13 06:48:51 crc kubenswrapper[4833]: I1013 06:48:51.958730 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" podStartSLOduration=2.958708261 podStartE2EDuration="2.958708261s" podCreationTimestamp="2025-10-13 06:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:48:51.946738836 +0000 UTC m=+1222.047161792" watchObservedRunningTime="2025-10-13 06:48:51.958708261 +0000 UTC m=+1222.059131187" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.523424 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.526006 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-log" containerID="cri-o://d15003c78174e77e48dd8ba92e97da7551c3e2cc1460bd79bb4ac6f52e0430dd" gracePeriod=30 Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.526604 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-api" containerID="cri-o://239b43a5d8dc3cc744ec699b3fd73a0404e9f8db7a984f3e20f0c54b11566268" gracePeriod=30 Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.804901 4833 util.go:48] "No ready sandbox for pod can be found. 
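For dnsmasq-dns-6d4d96bb9-t557n, one container (39da9c88...) runs to completion with exitCode=0 before the long-running container (49f0e204...) starts and the readiness probe arms; that ordering is characteristic of an init container, though the log itself does not name the containers. A sketch of the shape only, with names and images invented rather than taken from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		InitContainers: []corev1.Container{{
			Name:  "init",                     // runs first and must exit 0 (cf. exitCode=0 above)
			Image: "example.invalid/dns-init", // hypothetical image
		}},
		Containers: []corev1.Container{{
			Name:  "dnsmasq-dns",             // started only after every init container succeeds
			Image: "example.invalid/dnsmasq", // hypothetical image
		}},
	}
	fmt.Println(len(spec.InitContainers), "init container,", len(spec.Containers), "main container")
}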
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889400 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-run-httpd\") pod \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889515 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-combined-ca-bundle\") pod \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889667 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-log-httpd\") pod \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889740 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-ceilometer-tls-certs\") pod \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889789 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-scripts\") pod \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-sg-core-conf-yaml\") pod \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889872 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mckg\" (UniqueName: \"kubernetes.io/projected/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-kube-api-access-6mckg\") pod \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889928 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bf8aeef-36b7-4381-a9fb-4882c7628ca5" (UID: "4bf8aeef-36b7-4381-a9fb-4882c7628ca5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.889932 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-config-data\") pod \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\" (UID: \"4bf8aeef-36b7-4381-a9fb-4882c7628ca5\") " Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.890362 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.894043 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bf8aeef-36b7-4381-a9fb-4882c7628ca5" (UID: "4bf8aeef-36b7-4381-a9fb-4882c7628ca5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.902076 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-scripts" (OuterVolumeSpecName: "scripts") pod "4bf8aeef-36b7-4381-a9fb-4882c7628ca5" (UID: "4bf8aeef-36b7-4381-a9fb-4882c7628ca5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.902248 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-kube-api-access-6mckg" (OuterVolumeSpecName: "kube-api-access-6mckg") pod "4bf8aeef-36b7-4381-a9fb-4882c7628ca5" (UID: "4bf8aeef-36b7-4381-a9fb-4882c7628ca5"). InnerVolumeSpecName "kube-api-access-6mckg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.928981 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bf8aeef-36b7-4381-a9fb-4882c7628ca5" (UID: "4bf8aeef-36b7-4381-a9fb-4882c7628ca5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.941730 4833 generic.go:334] "Generic (PLEG): container finished" podID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerID="4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd" exitCode=0 Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.941764 4833 generic.go:334] "Generic (PLEG): container finished" podID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerID="7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d" exitCode=0 Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.941808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerDied","Data":"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd"} Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.941838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerDied","Data":"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d"} Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.941852 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf8aeef-36b7-4381-a9fb-4882c7628ca5","Type":"ContainerDied","Data":"f4ea9f41fae00cb161a0a7a32890a04e532dcd26736bd83fb8ed6cf3ddeb614e"} Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.941871 4833 scope.go:117] "RemoveContainer" containerID="0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.942092 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.947499 4833 generic.go:334] "Generic (PLEG): container finished" podID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerID="d15003c78174e77e48dd8ba92e97da7551c3e2cc1460bd79bb4ac6f52e0430dd" exitCode=143 Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.947612 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29886d4a-73c4-4f45-94c5-d551b1e1af37","Type":"ContainerDied","Data":"d15003c78174e77e48dd8ba92e97da7551c3e2cc1460bd79bb4ac6f52e0430dd"} Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.971923 4833 scope.go:117] "RemoveContainer" containerID="b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.974680 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4bf8aeef-36b7-4381-a9fb-4882c7628ca5" (UID: "4bf8aeef-36b7-4381-a9fb-4882c7628ca5"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.992930 4833 scope.go:117] "RemoveContainer" containerID="4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.993859 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.993893 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.993903 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.993911 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:52 crc kubenswrapper[4833]: I1013 06:48:52.993921 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mckg\" (UniqueName: \"kubernetes.io/projected/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-kube-api-access-6mckg\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.018407 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf8aeef-36b7-4381-a9fb-4882c7628ca5" (UID: "4bf8aeef-36b7-4381-a9fb-4882c7628ca5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.042849 4833 scope.go:117] "RemoveContainer" containerID="7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.045892 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-config-data" (OuterVolumeSpecName: "config-data") pod "4bf8aeef-36b7-4381-a9fb-4882c7628ca5" (UID: "4bf8aeef-36b7-4381-a9fb-4882c7628ca5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.066148 4833 scope.go:117] "RemoveContainer" containerID="0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878" Oct 13 06:48:53 crc kubenswrapper[4833]: E1013 06:48:53.066676 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878\": container with ID starting with 0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878 not found: ID does not exist" containerID="0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.066733 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878"} err="failed to get container status \"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878\": rpc error: code = NotFound desc = could not find container \"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878\": container with ID starting with 0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878 not found: ID does not exist" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.066775 4833 scope.go:117] "RemoveContainer" containerID="b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d" Oct 13 06:48:53 crc kubenswrapper[4833]: E1013 06:48:53.067341 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d\": container with ID starting with b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d not found: ID does not exist" containerID="b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.067413 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d"} err="failed to get container status \"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d\": rpc error: code = NotFound desc = could not find container \"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d\": container with ID starting with b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d not found: ID does not exist" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.067476 4833 scope.go:117] "RemoveContainer" containerID="4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd" Oct 13 06:48:53 crc kubenswrapper[4833]: E1013 06:48:53.067904 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd\": container with ID starting with 4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd not found: ID does not exist" containerID="4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.067949 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd"} err="failed to get container status \"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd\": rpc error: code = NotFound desc = could not 
find container \"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd\": container with ID starting with 4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd not found: ID does not exist" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.067976 4833 scope.go:117] "RemoveContainer" containerID="7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d" Oct 13 06:48:53 crc kubenswrapper[4833]: E1013 06:48:53.068329 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d\": container with ID starting with 7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d not found: ID does not exist" containerID="7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.068365 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d"} err="failed to get container status \"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d\": rpc error: code = NotFound desc = could not find container \"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d\": container with ID starting with 7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d not found: ID does not exist" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.068386 4833 scope.go:117] "RemoveContainer" containerID="0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.068808 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878"} err="failed to get container status \"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878\": rpc error: code = NotFound desc = could not find container \"0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878\": container with ID starting with 0b31e405c0a055675988bc2d2c45b6f222f71c7e498c51022dd6e38ca641c878 not found: ID does not exist" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.068838 4833 scope.go:117] "RemoveContainer" containerID="b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.069359 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d"} err="failed to get container status \"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d\": rpc error: code = NotFound desc = could not find container \"b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d\": container with ID starting with b0b3597acbd4ff1692d209be570cda391b692a9aa04e9c1cb6d2b8702ed9e54d not found: ID does not exist" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.069385 4833 scope.go:117] "RemoveContainer" containerID="4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.069625 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd"} err="failed to get container status \"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd\": rpc error: code = NotFound desc = could not 
find container \"4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd\": container with ID starting with 4158ba0b331b1638982b29a55f886a4ccf3132233df02a0d17ced367d54ba2fd not found: ID does not exist" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.069704 4833 scope.go:117] "RemoveContainer" containerID="7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.069994 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d"} err="failed to get container status \"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d\": rpc error: code = NotFound desc = could not find container \"7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d\": container with ID starting with 7c83227c81824ee6a08079584ec9d4b6fd174a5ed8fb03b9b9ec5c4de5c2b22d not found: ID does not exist" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.095367 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.095396 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf8aeef-36b7-4381-a9fb-4882c7628ca5-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.275221 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.286819 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.288247 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.301653 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:53 crc kubenswrapper[4833]: E1013 06:48:53.302007 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="sg-core" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.302023 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="sg-core" Oct 13 06:48:53 crc kubenswrapper[4833]: E1013 06:48:53.302052 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="proxy-httpd" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.302063 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="proxy-httpd" Oct 13 06:48:53 crc kubenswrapper[4833]: E1013 06:48:53.302074 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="ceilometer-notification-agent" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.302080 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="ceilometer-notification-agent" Oct 13 06:48:53 crc kubenswrapper[4833]: E1013 06:48:53.302094 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="ceilometer-central-agent" Oct 13 06:48:53 
crc kubenswrapper[4833]: I1013 06:48:53.302100 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="ceilometer-central-agent" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.302267 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="proxy-httpd" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.302292 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="ceilometer-notification-agent" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.302312 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="ceilometer-central-agent" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.303477 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" containerName="sg-core" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.305644 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.308315 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.308515 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.308648 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.337242 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.400154 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-run-httpd\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.400207 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.400281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-log-httpd\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.400323 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-scripts\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.400398 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.400537 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7dmg\" (UniqueName: \"kubernetes.io/projected/c7aa3029-9236-4ace-a63d-b5857a6b0e30-kube-api-access-d7dmg\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.400663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-config-data\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.400682 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.502456 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.502523 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7dmg\" (UniqueName: \"kubernetes.io/projected/c7aa3029-9236-4ace-a63d-b5857a6b0e30-kube-api-access-d7dmg\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.502564 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-config-data\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.502594 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.502640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-run-httpd\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.502668 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.502701 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-log-httpd\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.503228 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-run-httpd\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.503281 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-log-httpd\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.503310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-scripts\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.508267 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.508365 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-scripts\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.508795 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.509175 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-config-data\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.509279 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.522209 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7dmg\" (UniqueName: \"kubernetes.io/projected/c7aa3029-9236-4ace-a63d-b5857a6b0e30-kube-api-access-d7dmg\") pod \"ceilometer-0\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") " pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.623087 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:48:53 crc kubenswrapper[4833]: I1013 06:48:53.652401 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:54 crc kubenswrapper[4833]: W1013 06:48:54.080319 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7aa3029_9236_4ace_a63d_b5857a6b0e30.slice/crio-14218ee2f018da7641aa37b57bf6af8928716aba21d18ea399419ca87f0890f9 WatchSource:0}: Error finding container 14218ee2f018da7641aa37b57bf6af8928716aba21d18ea399419ca87f0890f9: Status 404 returned error can't find the container with id 14218ee2f018da7641aa37b57bf6af8928716aba21d18ea399419ca87f0890f9 Oct 13 06:48:54 crc kubenswrapper[4833]: I1013 06:48:54.082829 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:48:54 crc kubenswrapper[4833]: I1013 06:48:54.638314 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf8aeef-36b7-4381-a9fb-4882c7628ca5" path="/var/lib/kubelet/pods/4bf8aeef-36b7-4381-a9fb-4882c7628ca5/volumes" Oct 13 06:48:54 crc kubenswrapper[4833]: I1013 06:48:54.966872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerStarted","Data":"ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1"} Oct 13 06:48:54 crc kubenswrapper[4833]: I1013 06:48:54.967368 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerStarted","Data":"14218ee2f018da7641aa37b57bf6af8928716aba21d18ea399419ca87f0890f9"} Oct 13 06:48:55 crc kubenswrapper[4833]: I1013 06:48:55.989402 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerStarted","Data":"f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502"} Oct 13 06:48:55 crc kubenswrapper[4833]: I1013 06:48:55.990020 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerStarted","Data":"08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd"} Oct 13 06:48:55 crc kubenswrapper[4833]: I1013 06:48:55.992667 4833 generic.go:334] "Generic (PLEG): container finished" podID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerID="239b43a5d8dc3cc744ec699b3fd73a0404e9f8db7a984f3e20f0c54b11566268" exitCode=0 Oct 13 06:48:55 crc kubenswrapper[4833]: I1013 06:48:55.992706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29886d4a-73c4-4f45-94c5-d551b1e1af37","Type":"ContainerDied","Data":"239b43a5d8dc3cc744ec699b3fd73a0404e9f8db7a984f3e20f0c54b11566268"} Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.162779 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.264096 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgfsc\" (UniqueName: \"kubernetes.io/projected/29886d4a-73c4-4f45-94c5-d551b1e1af37-kube-api-access-pgfsc\") pod \"29886d4a-73c4-4f45-94c5-d551b1e1af37\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.264426 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-combined-ca-bundle\") pod \"29886d4a-73c4-4f45-94c5-d551b1e1af37\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.264547 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-config-data\") pod \"29886d4a-73c4-4f45-94c5-d551b1e1af37\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.264670 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29886d4a-73c4-4f45-94c5-d551b1e1af37-logs\") pod \"29886d4a-73c4-4f45-94c5-d551b1e1af37\" (UID: \"29886d4a-73c4-4f45-94c5-d551b1e1af37\") " Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.265453 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29886d4a-73c4-4f45-94c5-d551b1e1af37-logs" (OuterVolumeSpecName: "logs") pod "29886d4a-73c4-4f45-94c5-d551b1e1af37" (UID: "29886d4a-73c4-4f45-94c5-d551b1e1af37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.272052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29886d4a-73c4-4f45-94c5-d551b1e1af37-kube-api-access-pgfsc" (OuterVolumeSpecName: "kube-api-access-pgfsc") pod "29886d4a-73c4-4f45-94c5-d551b1e1af37" (UID: "29886d4a-73c4-4f45-94c5-d551b1e1af37"). InnerVolumeSpecName "kube-api-access-pgfsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.299269 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-config-data" (OuterVolumeSpecName: "config-data") pod "29886d4a-73c4-4f45-94c5-d551b1e1af37" (UID: "29886d4a-73c4-4f45-94c5-d551b1e1af37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.315937 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29886d4a-73c4-4f45-94c5-d551b1e1af37" (UID: "29886d4a-73c4-4f45-94c5-d551b1e1af37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.367103 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgfsc\" (UniqueName: \"kubernetes.io/projected/29886d4a-73c4-4f45-94c5-d551b1e1af37-kube-api-access-pgfsc\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.367139 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.367153 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29886d4a-73c4-4f45-94c5-d551b1e1af37-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:56 crc kubenswrapper[4833]: I1013 06:48:56.367164 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29886d4a-73c4-4f45-94c5-d551b1e1af37-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.025766 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29886d4a-73c4-4f45-94c5-d551b1e1af37","Type":"ContainerDied","Data":"01769f7464dddcb4f139d3918b4566f2beb545d8f7de23d19f582cddd3324562"} Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.025825 4833 scope.go:117] "RemoveContainer" containerID="239b43a5d8dc3cc744ec699b3fd73a0404e9f8db7a984f3e20f0c54b11566268" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.026040 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.081943 4833 scope.go:117] "RemoveContainer" containerID="d15003c78174e77e48dd8ba92e97da7551c3e2cc1460bd79bb4ac6f52e0430dd" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.110027 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.132699 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.142500 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:57 crc kubenswrapper[4833]: E1013 06:48:57.143035 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-log" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.143064 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-log" Oct 13 06:48:57 crc kubenswrapper[4833]: E1013 06:48:57.143086 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-api" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.143095 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-api" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.143314 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" containerName="nova-api-log" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.143336 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" 
containerName="nova-api-api" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.144598 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.147408 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.147630 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.147778 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.158201 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.288318 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7eb83c9-6234-42dc-b7de-e9b945d46a50-logs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.288714 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.288815 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.288851 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-config-data\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.288904 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rr75\" (UniqueName: \"kubernetes.io/projected/c7eb83c9-6234-42dc-b7de-e9b945d46a50-kube-api-access-6rr75\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.288973 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.390827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7eb83c9-6234-42dc-b7de-e9b945d46a50-logs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.391050 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.391192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.391303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-config-data\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.391416 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rr75\" (UniqueName: \"kubernetes.io/projected/c7eb83c9-6234-42dc-b7de-e9b945d46a50-kube-api-access-6rr75\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.391261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7eb83c9-6234-42dc-b7de-e9b945d46a50-logs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.391643 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.394960 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-config-data\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.395017 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.395487 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.395898 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.411050 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rr75\" (UniqueName: 
\"kubernetes.io/projected/c7eb83c9-6234-42dc-b7de-e9b945d46a50-kube-api-access-6rr75\") pod \"nova-api-0\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.459729 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:48:57 crc kubenswrapper[4833]: I1013 06:48:57.926367 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:48:57 crc kubenswrapper[4833]: W1013 06:48:57.927523 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7eb83c9_6234_42dc_b7de_e9b945d46a50.slice/crio-435f00967789b84fd7e2c88e6ac41ea173037f959b95bf714626974700075795 WatchSource:0}: Error finding container 435f00967789b84fd7e2c88e6ac41ea173037f959b95bf714626974700075795: Status 404 returned error can't find the container with id 435f00967789b84fd7e2c88e6ac41ea173037f959b95bf714626974700075795 Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.035925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerStarted","Data":"334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085"} Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.036092 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="ceilometer-central-agent" containerID="cri-o://ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1" gracePeriod=30 Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.036176 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="ceilometer-notification-agent" containerID="cri-o://08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd" gracePeriod=30 Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.036187 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="sg-core" containerID="cri-o://f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502" gracePeriod=30 Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.036226 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="proxy-httpd" containerID="cri-o://334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085" gracePeriod=30 Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.036471 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.047377 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7eb83c9-6234-42dc-b7de-e9b945d46a50","Type":"ContainerStarted","Data":"435f00967789b84fd7e2c88e6ac41ea173037f959b95bf714626974700075795"} Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.066620 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.256596458 podStartE2EDuration="5.066590703s" podCreationTimestamp="2025-10-13 06:48:53 +0000 UTC" firstStartedPulling="2025-10-13 06:48:54.082513975 +0000 UTC 
m=+1224.182936881" lastFinishedPulling="2025-10-13 06:48:56.89250821 +0000 UTC m=+1226.992931126" observedRunningTime="2025-10-13 06:48:58.057657433 +0000 UTC m=+1228.158080359" watchObservedRunningTime="2025-10-13 06:48:58.066590703 +0000 UTC m=+1228.167013619" Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.288310 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.305617 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:48:58 crc kubenswrapper[4833]: I1013 06:48:58.638167 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29886d4a-73c4-4f45-94c5-d551b1e1af37" path="/var/lib/kubelet/pods/29886d4a-73c4-4f45-94c5-d551b1e1af37/volumes" Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.058516 4833 generic.go:334] "Generic (PLEG): container finished" podID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerID="334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085" exitCode=0 Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.058569 4833 generic.go:334] "Generic (PLEG): container finished" podID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerID="f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502" exitCode=2 Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.058578 4833 generic.go:334] "Generic (PLEG): container finished" podID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerID="08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd" exitCode=0 Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.058583 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerDied","Data":"334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085"} Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.058634 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerDied","Data":"f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502"} Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.058648 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerDied","Data":"08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd"} Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.060663 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7eb83c9-6234-42dc-b7de-e9b945d46a50","Type":"ContainerStarted","Data":"19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259"} Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.060690 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7eb83c9-6234-42dc-b7de-e9b945d46a50","Type":"ContainerStarted","Data":"2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17"} Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.085004 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.084987246 podStartE2EDuration="2.084987246s" podCreationTimestamp="2025-10-13 06:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 
06:48:59.084961335 +0000 UTC m=+1229.185384261" watchObservedRunningTime="2025-10-13 06:48:59.084987246 +0000 UTC m=+1229.185410162"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.095497 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.226666 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-gf8r2"]
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.228890 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.231759 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.231780 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.239425 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gf8r2"]
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.334696 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.334917 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-config-data\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.335009 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-scripts\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.335128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgvnp\" (UniqueName: \"kubernetes.io/projected/03753511-4f13-4f91-abb0-1158faba0e60-kube-api-access-zgvnp\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.436861 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgvnp\" (UniqueName: \"kubernetes.io/projected/03753511-4f13-4f91-abb0-1158faba0e60-kube-api-access-zgvnp\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.437319 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.437767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-config-data\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.437968 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-scripts\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.443377 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-scripts\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.443401 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-config-data\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.443417 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.452390 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgvnp\" (UniqueName: \"kubernetes.io/projected/03753511-4f13-4f91-abb0-1158faba0e60-kube-api-access-zgvnp\") pod \"nova-cell1-cell-mapping-gf8r2\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") " pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.539247 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.552297 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.641231 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-log-httpd\") pod \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") "
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.641495 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-ceilometer-tls-certs\") pod \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") "
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.641581 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-combined-ca-bundle\") pod \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") "
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.641629 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-config-data\") pod \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") "
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.641734 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7dmg\" (UniqueName: \"kubernetes.io/projected/c7aa3029-9236-4ace-a63d-b5857a6b0e30-kube-api-access-d7dmg\") pod \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") "
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.641771 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-sg-core-conf-yaml\") pod \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") "
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.641816 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-run-httpd\") pod \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") "
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.641861 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-scripts\") pod \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\" (UID: \"c7aa3029-9236-4ace-a63d-b5857a6b0e30\") "
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.644404 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7aa3029-9236-4ace-a63d-b5857a6b0e30" (UID: "c7aa3029-9236-4ace-a63d-b5857a6b0e30"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.647923 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-scripts" (OuterVolumeSpecName: "scripts") pod "c7aa3029-9236-4ace-a63d-b5857a6b0e30" (UID: "c7aa3029-9236-4ace-a63d-b5857a6b0e30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.650739 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7aa3029-9236-4ace-a63d-b5857a6b0e30" (UID: "c7aa3029-9236-4ace-a63d-b5857a6b0e30"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.651143 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7aa3029-9236-4ace-a63d-b5857a6b0e30-kube-api-access-d7dmg" (OuterVolumeSpecName: "kube-api-access-d7dmg") pod "c7aa3029-9236-4ace-a63d-b5857a6b0e30" (UID: "c7aa3029-9236-4ace-a63d-b5857a6b0e30"). InnerVolumeSpecName "kube-api-access-d7dmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.702315 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7aa3029-9236-4ace-a63d-b5857a6b0e30" (UID: "c7aa3029-9236-4ace-a63d-b5857a6b0e30"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.717577 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c7aa3029-9236-4ace-a63d-b5857a6b0e30" (UID: "c7aa3029-9236-4ace-a63d-b5857a6b0e30"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.744583 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7dmg\" (UniqueName: \"kubernetes.io/projected/c7aa3029-9236-4ace-a63d-b5857a6b0e30-kube-api-access-d7dmg\") on node \"crc\" DevicePath \"\""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.744608 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.744617 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.744626 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.744635 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7aa3029-9236-4ace-a63d-b5857a6b0e30-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.744643 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.751510 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7aa3029-9236-4ace-a63d-b5857a6b0e30" (UID: "c7aa3029-9236-4ace-a63d-b5857a6b0e30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.779694 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-config-data" (OuterVolumeSpecName: "config-data") pod "c7aa3029-9236-4ace-a63d-b5857a6b0e30" (UID: "c7aa3029-9236-4ace-a63d-b5857a6b0e30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.846136 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.846199 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7aa3029-9236-4ace-a63d-b5857a6b0e30-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:48:59 crc kubenswrapper[4833]: I1013 06:48:59.927964 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.024166 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-26tbs"]
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.024772 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" podUID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" containerName="dnsmasq-dns" containerID="cri-o://32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364" gracePeriod=10
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.096752 4833 generic.go:334] "Generic (PLEG): container finished" podID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerID="ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1" exitCode=0
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.098139 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.101746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerDied","Data":"ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1"}
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.101829 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7aa3029-9236-4ace-a63d-b5857a6b0e30","Type":"ContainerDied","Data":"14218ee2f018da7641aa37b57bf6af8928716aba21d18ea399419ca87f0890f9"}
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.101854 4833 scope.go:117] "RemoveContainer" containerID="334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.136125 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gf8r2"]
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.155667 4833 scope.go:117] "RemoveContainer" containerID="f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.210705 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.234867 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.264615 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:49:00 crc kubenswrapper[4833]: E1013 06:49:00.265052 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="ceilometer-notification-agent"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.265070 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="ceilometer-notification-agent"
Oct 13 06:49:00 crc kubenswrapper[4833]: E1013 06:49:00.265084 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="proxy-httpd"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.265091 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="proxy-httpd"
Oct 13 06:49:00 crc kubenswrapper[4833]: E1013 06:49:00.265106 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="ceilometer-central-agent"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.265112 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="ceilometer-central-agent"
Oct 13 06:49:00 crc kubenswrapper[4833]: E1013 06:49:00.265125 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="sg-core"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.265131 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="sg-core"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.265318 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="sg-core"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.265333 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="ceilometer-notification-agent"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.265345 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="ceilometer-central-agent"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.265357 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" containerName="proxy-httpd"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.266981 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.269045 4833 scope.go:117] "RemoveContainer" containerID="08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.272033 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.272129 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.272244 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.289846 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.311794 4833 scope.go:117] "RemoveContainer" containerID="ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.343842 4833 scope.go:117] "RemoveContainer" containerID="334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085"
Oct 13 06:49:00 crc kubenswrapper[4833]: E1013 06:49:00.344920 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085\": container with ID starting with 334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085 not found: ID does not exist" containerID="334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.344960 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085"} err="failed to get container status \"334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085\": rpc error: code = NotFound desc = could not find container \"334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085\": container with ID starting with 334c92fde7b02ce152bb11e6a6403510ab8bf7c4918a7c1f82247490c49cc085 not found: ID does not exist"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.344986 4833 scope.go:117] "RemoveContainer" containerID="f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502"
Oct 13 06:49:00 crc kubenswrapper[4833]: E1013 06:49:00.345410 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502\": container with ID starting with f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502 not found: ID does not exist" containerID="f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.345454 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502"} err="failed to get container status \"f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502\": rpc error: code = NotFound desc = could not find container \"f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502\": container with ID starting with f16c52ee6d647e7be5c25026890bd0304f9bf2e84c1e22e7e94d29cc520a2502 not found: ID does not exist"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.345484 4833 scope.go:117] "RemoveContainer" containerID="08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd"
Oct 13 06:49:00 crc kubenswrapper[4833]: E1013 06:49:00.345761 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd\": container with ID starting with 08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd not found: ID does not exist" containerID="08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.345801 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd"} err="failed to get container status \"08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd\": rpc error: code = NotFound desc = could not find container \"08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd\": container with ID starting with 08eb8f7e21e6fc3a7f26edbc89c72af26f01c0669232306a904d880d433b79bd not found: ID does not exist"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.345839 4833 scope.go:117] "RemoveContainer" containerID="ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1"
Oct 13 06:49:00 crc kubenswrapper[4833]: E1013 06:49:00.346051 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1\": container with ID starting with ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1 not found: ID does not exist" containerID="ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.346074 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1"} err="failed to get container status \"ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1\": rpc error: code = NotFound desc = could not find container \"ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1\": container with ID starting with ff40f2dd9ee2d022bcf742c6564fd9ec8546b1ba810741aa28f2f8f5122f30f1 not found: ID does not exist"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.375474 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.375557 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.375586 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-scripts\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.375603 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p92c\" (UniqueName: \"kubernetes.io/projected/4418034e-f484-4638-94bd-5b086af9e8f3-kube-api-access-8p92c\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.375641 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.375728 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-config-data\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.375753 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-log-httpd\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.375885 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-run-httpd\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477189 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-run-httpd\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477231 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477283 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477308 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-scripts\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477323 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p92c\" (UniqueName: \"kubernetes.io/projected/4418034e-f484-4638-94bd-5b086af9e8f3-kube-api-access-8p92c\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477345 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477384 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-config-data\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477412 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-log-httpd\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.477778 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-run-httpd\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.478234 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-log-httpd\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.483207 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-config-data\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.483430 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-scripts\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.483920 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.487838 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.488869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.496060 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p92c\" (UniqueName: \"kubernetes.io/projected/4418034e-f484-4638-94bd-5b086af9e8f3-kube-api-access-8p92c\") pod \"ceilometer-0\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.590970 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.603277 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.654029 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7aa3029-9236-4ace-a63d-b5857a6b0e30" path="/var/lib/kubelet/pods/c7aa3029-9236-4ace-a63d-b5857a6b0e30/volumes"
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.683049 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-svc\") pod \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") "
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.683255 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt56q\" (UniqueName: \"kubernetes.io/projected/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-kube-api-access-mt56q\") pod \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") "
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.683432 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-config\") pod \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") "
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.683516 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-nb\") pod \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") "
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.683598 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-sb\") pod \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") "
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.683709 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-swift-storage-0\") pod \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\" (UID: \"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9\") "
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.686448 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-kube-api-access-mt56q" (OuterVolumeSpecName: "kube-api-access-mt56q") pod "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" (UID: "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9"). InnerVolumeSpecName "kube-api-access-mt56q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.737370 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" (UID: "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.744335 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" (UID: "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.758502 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-config" (OuterVolumeSpecName: "config") pod "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" (UID: "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.766160 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" (UID: "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.769914 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" (UID: "1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.786300 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-config\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.786333 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.786346 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.786358 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.786369 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:00 crc kubenswrapper[4833]: I1013 06:49:00.786377 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt56q\" (UniqueName: \"kubernetes.io/projected/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9-kube-api-access-mt56q\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.058698 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.106949 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gf8r2" event={"ID":"03753511-4f13-4f91-abb0-1158faba0e60","Type":"ContainerStarted","Data":"dfc611bec1d4ce90589a0eac486b1559ddf622b45b5b507cb8abab55d933b401"}
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.107000 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gf8r2" event={"ID":"03753511-4f13-4f91-abb0-1158faba0e60","Type":"ContainerStarted","Data":"7216aae07c88a3df6d34c70fd9293933120ab2ecab9c8bd9d2e57db6785882d3"}
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.108380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerStarted","Data":"0ca875ce5679a12bb4cabcf0ebe06fabe16ded1d72a923c749d8a492f0f73dd1"}
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.113522 4833 generic.go:334] "Generic (PLEG): container finished" podID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" containerID="32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364" exitCode=0
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.113635 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" event={"ID":"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9","Type":"ContainerDied","Data":"32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364"}
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.113665 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs" event={"ID":"1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9","Type":"ContainerDied","Data":"f1a34e39026a0d783506a761bceb4c71203a92921c1074dd163fc2424e486a9f"}
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.113687 4833 scope.go:117] "RemoveContainer" containerID="32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364"
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.113824 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-26tbs"
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.132182 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-gf8r2" podStartSLOduration=2.132147854 podStartE2EDuration="2.132147854s" podCreationTimestamp="2025-10-13 06:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:49:01.120303402 +0000 UTC m=+1231.220726318" watchObservedRunningTime="2025-10-13 06:49:01.132147854 +0000 UTC m=+1231.232570810"
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.157421 4833 scope.go:117] "RemoveContainer" containerID="8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed"
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.158925 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-26tbs"]
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.172319 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-26tbs"]
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.206862 4833 scope.go:117] "RemoveContainer" containerID="32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364"
Oct 13 06:49:01 crc kubenswrapper[4833]: E1013 06:49:01.207795 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364\": container with ID starting with 32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364 not found: ID does not exist" containerID="32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364"
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.207846 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364"} err="failed to get container status \"32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364\": rpc error: code = NotFound desc = could not find container \"32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364\": container with ID starting with 32890bbdeeeca403a2277afb0203e9c187bb14905e5fda3b7f3a40ea1824d364 not found: ID does not exist"
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.207877 4833 scope.go:117] "RemoveContainer" containerID="8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed"
Oct 13 06:49:01 crc kubenswrapper[4833]: E1013 06:49:01.209082 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed\": container with ID starting with 8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed not found: ID does not exist" containerID="8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed"
Oct 13 06:49:01 crc kubenswrapper[4833]: I1013 06:49:01.209166 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed"} err="failed to get container status \"8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed\": rpc error: code = NotFound desc = could not find container \"8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed\": container with ID starting with 8f1fe79426b9fa427cc5c31e21ca7823fdf4110a096cbb45b248814edc3f7aed not found: ID does not exist"
Oct 13 06:49:02 crc kubenswrapper[4833]: I1013 06:49:02.124233 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerStarted","Data":"b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd"}
Oct 13 06:49:02 crc kubenswrapper[4833]: I1013 06:49:02.666643 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" path="/var/lib/kubelet/pods/1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9/volumes"
Oct 13 06:49:03 crc kubenswrapper[4833]: I1013 06:49:03.151107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerStarted","Data":"56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5"}
Oct 13 06:49:03 crc kubenswrapper[4833]: I1013 06:49:03.151422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerStarted","Data":"d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df"}
Oct 13 06:49:05 crc kubenswrapper[4833]: I1013 06:49:05.170411 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerStarted","Data":"2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461"}
Oct 13 06:49:05 crc kubenswrapper[4833]: I1013 06:49:05.172146 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 13 06:49:05 crc kubenswrapper[4833]: I1013 06:49:05.200089 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6511463549999998 podStartE2EDuration="5.200074779s" podCreationTimestamp="2025-10-13 06:49:00 +0000 UTC" firstStartedPulling="2025-10-13 06:49:01.074721316 +0000 UTC m=+1231.175144222" lastFinishedPulling="2025-10-13 06:49:04.62364973 +0000 UTC m=+1234.724072646" observedRunningTime="2025-10-13 06:49:05.195688347 +0000 UTC m=+1235.296111253" watchObservedRunningTime="2025-10-13 06:49:05.200074779 +0000 UTC m=+1235.300497695"
Oct 13 06:49:06 crc kubenswrapper[4833]: I1013 06:49:06.180153 4833 generic.go:334] "Generic (PLEG): container finished" podID="03753511-4f13-4f91-abb0-1158faba0e60" containerID="dfc611bec1d4ce90589a0eac486b1559ddf622b45b5b507cb8abab55d933b401" exitCode=0
Oct 13 06:49:06 crc kubenswrapper[4833]: I1013 06:49:06.181013 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gf8r2" event={"ID":"03753511-4f13-4f91-abb0-1158faba0e60","Type":"ContainerDied","Data":"dfc611bec1d4ce90589a0eac486b1559ddf622b45b5b507cb8abab55d933b401"}
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.461427 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.462303 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.570558 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.648327 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-scripts\") pod \"03753511-4f13-4f91-abb0-1158faba0e60\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") "
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.648408 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgvnp\" (UniqueName: \"kubernetes.io/projected/03753511-4f13-4f91-abb0-1158faba0e60-kube-api-access-zgvnp\") pod \"03753511-4f13-4f91-abb0-1158faba0e60\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") "
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.648562 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-config-data\") pod \"03753511-4f13-4f91-abb0-1158faba0e60\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") "
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.648607 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle\") pod \"03753511-4f13-4f91-abb0-1158faba0e60\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") "
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.668450 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-scripts" (OuterVolumeSpecName: "scripts") pod "03753511-4f13-4f91-abb0-1158faba0e60" (UID: "03753511-4f13-4f91-abb0-1158faba0e60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.670705 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03753511-4f13-4f91-abb0-1158faba0e60-kube-api-access-zgvnp" (OuterVolumeSpecName: "kube-api-access-zgvnp") pod "03753511-4f13-4f91-abb0-1158faba0e60" (UID: "03753511-4f13-4f91-abb0-1158faba0e60"). InnerVolumeSpecName "kube-api-access-zgvnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:49:07 crc kubenswrapper[4833]: E1013 06:49:07.686588 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle podName:03753511-4f13-4f91-abb0-1158faba0e60 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:08.186530096 +0000 UTC m=+1238.286953012 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle") pod "03753511-4f13-4f91-abb0-1158faba0e60" (UID: "03753511-4f13-4f91-abb0-1158faba0e60") : error deleting /var/lib/kubelet/pods/03753511-4f13-4f91-abb0-1158faba0e60/volume-subpaths: remove /var/lib/kubelet/pods/03753511-4f13-4f91-abb0-1158faba0e60/volume-subpaths: no such file or directory
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.689667 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-config-data" (OuterVolumeSpecName: "config-data") pod "03753511-4f13-4f91-abb0-1158faba0e60" (UID: "03753511-4f13-4f91-abb0-1158faba0e60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.750629 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.750660 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:07 crc kubenswrapper[4833]: I1013 06:49:07.750669 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgvnp\" (UniqueName: \"kubernetes.io/projected/03753511-4f13-4f91-abb0-1158faba0e60-kube-api-access-zgvnp\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.226305 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gf8r2" event={"ID":"03753511-4f13-4f91-abb0-1158faba0e60","Type":"ContainerDied","Data":"7216aae07c88a3df6d34c70fd9293933120ab2ecab9c8bd9d2e57db6785882d3"}
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.226342 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7216aae07c88a3df6d34c70fd9293933120ab2ecab9c8bd9d2e57db6785882d3"
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.226400 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gf8r2"
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.259271 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle\") pod \"03753511-4f13-4f91-abb0-1158faba0e60\" (UID: \"03753511-4f13-4f91-abb0-1158faba0e60\") "
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.264399 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03753511-4f13-4f91-abb0-1158faba0e60" (UID: "03753511-4f13-4f91-abb0-1158faba0e60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.363843 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03753511-4f13-4f91-abb0-1158faba0e60-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.387348 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.387616 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9dea0d66-db51-422c-add8-6d97b7731116" containerName="nova-scheduler-scheduler" containerID="cri-o://56487496984f25952988b6e0884601eb729cbc56ed79b090bd191c9aae56adb4" gracePeriod=30
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.399752 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.399997 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-log" containerID="cri-o://2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17" gracePeriod=30
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.400136 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-api" containerID="cri-o://19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259" gracePeriod=30
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.419078 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": EOF"
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.419088 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": EOF"
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.421354 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.421597 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-log" containerID="cri-o://5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899" gracePeriod=30
Oct 13 06:49:08 crc kubenswrapper[4833]: I1013 06:49:08.421682 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-metadata" containerID="cri-o://e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b" gracePeriod=30
Oct 13 06:49:09 crc kubenswrapper[4833]: I1013 06:49:09.236771 4833 generic.go:334] "Generic (PLEG): container finished" podID="318447c0-0084-4b42-92ad-086709e0576b" containerID="5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899" exitCode=143
Oct 13 06:49:09 crc kubenswrapper[4833]: I1013 06:49:09.236856 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318447c0-0084-4b42-92ad-086709e0576b","Type":"ContainerDied","Data":"5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899"}
Oct 13 06:49:09 crc kubenswrapper[4833]: I1013 06:49:09.239802 4833 generic.go:334] "Generic (PLEG): container finished" podID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerID="2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17" exitCode=143
Oct 13 06:49:09 crc kubenswrapper[4833]: I1013 06:49:09.239896 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7eb83c9-6234-42dc-b7de-e9b945d46a50","Type":"ContainerDied","Data":"2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17"}
Oct 13 06:49:11 crc kubenswrapper[4833]: I1013 06:49:11.568268 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:58636->10.217.0.194:8775: read: connection reset by peer"
Oct 13 06:49:11 crc kubenswrapper[4833]: I1013 06:49:11.568692 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:58620->10.217.0.194:8775: read: connection reset by peer"
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.039958 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.134102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/318447c0-0084-4b42-92ad-086709e0576b-logs" (OuterVolumeSpecName: "logs") pod "318447c0-0084-4b42-92ad-086709e0576b" (UID: "318447c0-0084-4b42-92ad-086709e0576b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.133370 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318447c0-0084-4b42-92ad-086709e0576b-logs\") pod \"318447c0-0084-4b42-92ad-086709e0576b\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") "
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.134428 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-config-data\") pod \"318447c0-0084-4b42-92ad-086709e0576b\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") "
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.135373 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-combined-ca-bundle\") pod \"318447c0-0084-4b42-92ad-086709e0576b\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") "
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.135430 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-nova-metadata-tls-certs\") pod \"318447c0-0084-4b42-92ad-086709e0576b\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") "
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.135520 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqhww\" (UniqueName: \"kubernetes.io/projected/318447c0-0084-4b42-92ad-086709e0576b-kube-api-access-sqhww\") pod \"318447c0-0084-4b42-92ad-086709e0576b\" (UID: \"318447c0-0084-4b42-92ad-086709e0576b\") "
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.136860 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318447c0-0084-4b42-92ad-086709e0576b-logs\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.145212 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318447c0-0084-4b42-92ad-086709e0576b-kube-api-access-sqhww" (OuterVolumeSpecName: "kube-api-access-sqhww") pod "318447c0-0084-4b42-92ad-086709e0576b" (UID: "318447c0-0084-4b42-92ad-086709e0576b"). InnerVolumeSpecName "kube-api-access-sqhww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.198558 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-config-data" (OuterVolumeSpecName: "config-data") pod "318447c0-0084-4b42-92ad-086709e0576b" (UID: "318447c0-0084-4b42-92ad-086709e0576b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.199196 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "318447c0-0084-4b42-92ad-086709e0576b" (UID: "318447c0-0084-4b42-92ad-086709e0576b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.239444 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.239514 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.239554 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqhww\" (UniqueName: \"kubernetes.io/projected/318447c0-0084-4b42-92ad-086709e0576b-kube-api-access-sqhww\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.247061 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "318447c0-0084-4b42-92ad-086709e0576b" (UID: "318447c0-0084-4b42-92ad-086709e0576b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.284317 4833 generic.go:334] "Generic (PLEG): container finished" podID="318447c0-0084-4b42-92ad-086709e0576b" containerID="e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b" exitCode=0
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.284389 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.284410 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318447c0-0084-4b42-92ad-086709e0576b","Type":"ContainerDied","Data":"e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b"}
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.286830 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318447c0-0084-4b42-92ad-086709e0576b","Type":"ContainerDied","Data":"c43a2e6bad732959845bce0303151e98b1f8e920e12981b12654fc07f87efeaf"}
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.286853 4833 scope.go:117] "RemoveContainer" containerID="e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b"
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.291566 4833 generic.go:334] "Generic (PLEG): container finished" podID="9dea0d66-db51-422c-add8-6d97b7731116" containerID="56487496984f25952988b6e0884601eb729cbc56ed79b090bd191c9aae56adb4" exitCode=0
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.291609 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9dea0d66-db51-422c-add8-6d97b7731116","Type":"ContainerDied","Data":"56487496984f25952988b6e0884601eb729cbc56ed79b090bd191c9aae56adb4"}
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.323668 4833 scope.go:117] "RemoveContainer" containerID="5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899"
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.333624 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.343660 4833 reconciler_common.go:293] "Volume detached for volume
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318447c0-0084-4b42-92ad-086709e0576b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.358338 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.373281 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:49:12 crc kubenswrapper[4833]: E1013 06:49:12.373761 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" containerName="dnsmasq-dns" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.373777 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" containerName="dnsmasq-dns" Oct 13 06:49:12 crc kubenswrapper[4833]: E1013 06:49:12.373792 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" containerName="init" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.373798 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" containerName="init" Oct 13 06:49:12 crc kubenswrapper[4833]: E1013 06:49:12.373810 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03753511-4f13-4f91-abb0-1158faba0e60" containerName="nova-manage" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.373817 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="03753511-4f13-4f91-abb0-1158faba0e60" containerName="nova-manage" Oct 13 06:49:12 crc kubenswrapper[4833]: E1013 06:49:12.373842 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-metadata" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.373847 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-metadata" Oct 13 06:49:12 crc kubenswrapper[4833]: E1013 06:49:12.373858 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-log" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.373864 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-log" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.374038 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-log" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.374052 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="318447c0-0084-4b42-92ad-086709e0576b" containerName="nova-metadata-metadata" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.374070 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="03753511-4f13-4f91-abb0-1158faba0e60" containerName="nova-manage" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.374080 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6a27f2-c55b-45bc-b6e3-9a4d7ecba2e9" containerName="dnsmasq-dns" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.375726 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.378750 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.382879 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.382877 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.401005 4833 scope.go:117] "RemoveContainer" containerID="e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b" Oct 13 06:49:12 crc kubenswrapper[4833]: E1013 06:49:12.401461 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b\": container with ID starting with e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b not found: ID does not exist" containerID="e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.401548 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b"} err="failed to get container status \"e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b\": rpc error: code = NotFound desc = could not find container \"e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b\": container with ID starting with e02872934d4c48155bf45e3d9abfa4cf548374373601ace4dac6abbcc54f7d7b not found: ID does not exist" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.401580 4833 scope.go:117] "RemoveContainer" containerID="5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899" Oct 13 06:49:12 crc kubenswrapper[4833]: E1013 06:49:12.401823 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899\": container with ID starting with 5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899 not found: ID does not exist" containerID="5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.401851 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899"} err="failed to get container status \"5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899\": rpc error: code = NotFound desc = could not find container \"5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899\": container with ID starting with 5c79350ae41df5a5e240931a133bde60403509305f60a215355a42b198e5d899 not found: ID does not exist" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.471684 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.547178 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-config-data\") pod \"9dea0d66-db51-422c-add8-6d97b7731116\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.547431 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-combined-ca-bundle\") pod \"9dea0d66-db51-422c-add8-6d97b7731116\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.547533 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjbdg\" (UniqueName: \"kubernetes.io/projected/9dea0d66-db51-422c-add8-6d97b7731116-kube-api-access-tjbdg\") pod \"9dea0d66-db51-422c-add8-6d97b7731116\" (UID: \"9dea0d66-db51-422c-add8-6d97b7731116\") " Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.548040 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kt55\" (UniqueName: \"kubernetes.io/projected/2aaf5d8e-00de-473b-91d2-1dd8a7354853-kube-api-access-6kt55\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.548129 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.548222 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-config-data\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.548388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.548454 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaf5d8e-00de-473b-91d2-1dd8a7354853-logs\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.554080 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dea0d66-db51-422c-add8-6d97b7731116-kube-api-access-tjbdg" (OuterVolumeSpecName: "kube-api-access-tjbdg") pod "9dea0d66-db51-422c-add8-6d97b7731116" (UID: "9dea0d66-db51-422c-add8-6d97b7731116"). InnerVolumeSpecName "kube-api-access-tjbdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.581610 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-config-data" (OuterVolumeSpecName: "config-data") pod "9dea0d66-db51-422c-add8-6d97b7731116" (UID: "9dea0d66-db51-422c-add8-6d97b7731116"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.583845 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dea0d66-db51-422c-add8-6d97b7731116" (UID: "9dea0d66-db51-422c-add8-6d97b7731116"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.643762 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318447c0-0084-4b42-92ad-086709e0576b" path="/var/lib/kubelet/pods/318447c0-0084-4b42-92ad-086709e0576b/volumes" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.651206 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-config-data\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.651319 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.651375 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaf5d8e-00de-473b-91d2-1dd8a7354853-logs\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.651590 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kt55\" (UniqueName: \"kubernetes.io/projected/2aaf5d8e-00de-473b-91d2-1dd8a7354853-kube-api-access-6kt55\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.651642 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.651744 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjbdg\" (UniqueName: \"kubernetes.io/projected/9dea0d66-db51-422c-add8-6d97b7731116-kube-api-access-tjbdg\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.651774 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 
06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.651792 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0d66-db51-422c-add8-6d97b7731116-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.655439 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaf5d8e-00de-473b-91d2-1dd8a7354853-logs\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.660058 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-config-data\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.662283 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.667187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.675683 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kt55\" (UniqueName: \"kubernetes.io/projected/2aaf5d8e-00de-473b-91d2-1dd8a7354853-kube-api-access-6kt55\") pod \"nova-metadata-0\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " pod="openstack/nova-metadata-0" Oct 13 06:49:12 crc kubenswrapper[4833]: I1013 06:49:12.700020 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.141703 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:49:13 crc kubenswrapper[4833]: W1013 06:49:13.152155 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2aaf5d8e_00de_473b_91d2_1dd8a7354853.slice/crio-9ba2a863b1273cd7f53154afc472c8c317f55783b478d39378e76bab634d4ee5 WatchSource:0}: Error finding container 9ba2a863b1273cd7f53154afc472c8c317f55783b478d39378e76bab634d4ee5: Status 404 returned error can't find the container with id 9ba2a863b1273cd7f53154afc472c8c317f55783b478d39378e76bab634d4ee5 Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.303487 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aaf5d8e-00de-473b-91d2-1dd8a7354853","Type":"ContainerStarted","Data":"9ba2a863b1273cd7f53154afc472c8c317f55783b478d39378e76bab634d4ee5"} Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.305566 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9dea0d66-db51-422c-add8-6d97b7731116","Type":"ContainerDied","Data":"b92d7bc7352b1a0769c6ab3bcab3f342f24c4a5fedb425c8ef1e1c364b5291a6"} Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.305604 4833 scope.go:117] "RemoveContainer" containerID="56487496984f25952988b6e0884601eb729cbc56ed79b090bd191c9aae56adb4" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.305659 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.342761 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.354638 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.365850 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:49:13 crc kubenswrapper[4833]: E1013 06:49:13.366368 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dea0d66-db51-422c-add8-6d97b7731116" containerName="nova-scheduler-scheduler" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.366386 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dea0d66-db51-422c-add8-6d97b7731116" containerName="nova-scheduler-scheduler" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.366616 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dea0d66-db51-422c-add8-6d97b7731116" containerName="nova-scheduler-scheduler" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.367347 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.370453 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.376965 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.472632 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gfz\" (UniqueName: \"kubernetes.io/projected/475289a4-cf33-4f56-93d9-73f7551026f8-kube-api-access-z9gfz\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.472736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-config-data\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.472789 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.574522 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-config-data\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.574846 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.575055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9gfz\" (UniqueName: \"kubernetes.io/projected/475289a4-cf33-4f56-93d9-73f7551026f8-kube-api-access-z9gfz\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.579193 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-config-data\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.579719 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.591640 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9gfz\" (UniqueName: 
\"kubernetes.io/projected/475289a4-cf33-4f56-93d9-73f7551026f8-kube-api-access-z9gfz\") pod \"nova-scheduler-0\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " pod="openstack/nova-scheduler-0" Oct 13 06:49:13 crc kubenswrapper[4833]: I1013 06:49:13.737862 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:49:14 crc kubenswrapper[4833]: W1013 06:49:14.186338 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod475289a4_cf33_4f56_93d9_73f7551026f8.slice/crio-0abc208f9dd84299dd86758a437d24681925a350649698b8d7d6c9dd896f1e42 WatchSource:0}: Error finding container 0abc208f9dd84299dd86758a437d24681925a350649698b8d7d6c9dd896f1e42: Status 404 returned error can't find the container with id 0abc208f9dd84299dd86758a437d24681925a350649698b8d7d6c9dd896f1e42 Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.186642 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.228604 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.316769 4833 generic.go:334] "Generic (PLEG): container finished" podID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerID="19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259" exitCode=0 Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.316863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7eb83c9-6234-42dc-b7de-e9b945d46a50","Type":"ContainerDied","Data":"19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259"} Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.316885 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.316899 4833 scope.go:117] "RemoveContainer" containerID="19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.316889 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7eb83c9-6234-42dc-b7de-e9b945d46a50","Type":"ContainerDied","Data":"435f00967789b84fd7e2c88e6ac41ea173037f959b95bf714626974700075795"} Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.318275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475289a4-cf33-4f56-93d9-73f7551026f8","Type":"ContainerStarted","Data":"0abc208f9dd84299dd86758a437d24681925a350649698b8d7d6c9dd896f1e42"} Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.321107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aaf5d8e-00de-473b-91d2-1dd8a7354853","Type":"ContainerStarted","Data":"0c49851d3254ed77c14a56073d79efd51082af7a60fed7458676ff9c919c96c6"} Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.321139 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aaf5d8e-00de-473b-91d2-1dd8a7354853","Type":"ContainerStarted","Data":"988658baa74f964f157fcd718e94c95fc2e7688fc3335d190e84005d02e7fd3a"} Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.336681 4833 scope.go:117] "RemoveContainer" containerID="2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.348724 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.348700756 podStartE2EDuration="2.348700756s" podCreationTimestamp="2025-10-13 06:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:49:14.339738765 +0000 UTC m=+1244.440161691" watchObservedRunningTime="2025-10-13 06:49:14.348700756 +0000 UTC m=+1244.449123672" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.363430 4833 scope.go:117] "RemoveContainer" containerID="19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259" Oct 13 06:49:14 crc kubenswrapper[4833]: E1013 06:49:14.365450 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259\": container with ID starting with 19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259 not found: ID does not exist" containerID="19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.365501 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259"} err="failed to get container status \"19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259\": rpc error: code = NotFound desc = could not find container \"19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259\": container with ID starting with 19c53edada7f93a3cca5097f292dc63d54f8b0990ac4f79fcdff6c6f000f4259 not found: ID does not exist" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.365530 4833 scope.go:117] "RemoveContainer" 
containerID="2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17" Oct 13 06:49:14 crc kubenswrapper[4833]: E1013 06:49:14.365958 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17\": container with ID starting with 2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17 not found: ID does not exist" containerID="2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.365992 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17"} err="failed to get container status \"2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17\": rpc error: code = NotFound desc = could not find container \"2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17\": container with ID starting with 2c83a9a0d8fc7d90698f81bcc9041aad81c96aa81135f206855fa76cb82c0a17 not found: ID does not exist" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.389820 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rr75\" (UniqueName: \"kubernetes.io/projected/c7eb83c9-6234-42dc-b7de-e9b945d46a50-kube-api-access-6rr75\") pod \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.389910 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-internal-tls-certs\") pod \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.390043 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-public-tls-certs\") pod \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.390078 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-combined-ca-bundle\") pod \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.390131 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7eb83c9-6234-42dc-b7de-e9b945d46a50-logs\") pod \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.390187 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-config-data\") pod \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\" (UID: \"c7eb83c9-6234-42dc-b7de-e9b945d46a50\") " Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.390491 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7eb83c9-6234-42dc-b7de-e9b945d46a50-logs" (OuterVolumeSpecName: "logs") pod "c7eb83c9-6234-42dc-b7de-e9b945d46a50" (UID: 
"c7eb83c9-6234-42dc-b7de-e9b945d46a50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.390806 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7eb83c9-6234-42dc-b7de-e9b945d46a50-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.393439 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7eb83c9-6234-42dc-b7de-e9b945d46a50-kube-api-access-6rr75" (OuterVolumeSpecName: "kube-api-access-6rr75") pod "c7eb83c9-6234-42dc-b7de-e9b945d46a50" (UID: "c7eb83c9-6234-42dc-b7de-e9b945d46a50"). InnerVolumeSpecName "kube-api-access-6rr75". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.416266 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7eb83c9-6234-42dc-b7de-e9b945d46a50" (UID: "c7eb83c9-6234-42dc-b7de-e9b945d46a50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.418211 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-config-data" (OuterVolumeSpecName: "config-data") pod "c7eb83c9-6234-42dc-b7de-e9b945d46a50" (UID: "c7eb83c9-6234-42dc-b7de-e9b945d46a50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.437027 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7eb83c9-6234-42dc-b7de-e9b945d46a50" (UID: "c7eb83c9-6234-42dc-b7de-e9b945d46a50"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.444065 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7eb83c9-6234-42dc-b7de-e9b945d46a50" (UID: "c7eb83c9-6234-42dc-b7de-e9b945d46a50"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.492970 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rr75\" (UniqueName: \"kubernetes.io/projected/c7eb83c9-6234-42dc-b7de-e9b945d46a50-kube-api-access-6rr75\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.493005 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.493013 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.493022 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.493031 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7eb83c9-6234-42dc-b7de-e9b945d46a50-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.640295 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dea0d66-db51-422c-add8-6d97b7731116" path="/var/lib/kubelet/pods/9dea0d66-db51-422c-add8-6d97b7731116/volumes" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.729957 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.744059 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.754497 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 06:49:14 crc kubenswrapper[4833]: E1013 06:49:14.754880 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-api" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.754899 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-api" Oct 13 06:49:14 crc kubenswrapper[4833]: E1013 06:49:14.754920 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-log" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.754928 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-log" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.755102 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-log" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.755111 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" containerName="nova-api-api" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.756059 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.762155 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.765501 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.766980 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.773660 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.900182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c5134b-fc5b-453c-87ee-6a26e08796cf-logs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.900248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.900361 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fjg\" (UniqueName: \"kubernetes.io/projected/69c5134b-fc5b-453c-87ee-6a26e08796cf-kube-api-access-98fjg\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.900390 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-config-data\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.900413 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:14 crc kubenswrapper[4833]: I1013 06:49:14.900473 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-public-tls-certs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.001500 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fjg\" (UniqueName: \"kubernetes.io/projected/69c5134b-fc5b-453c-87ee-6a26e08796cf-kube-api-access-98fjg\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.001575 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-config-data\") pod 
\"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.001594 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.001640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-public-tls-certs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.001682 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c5134b-fc5b-453c-87ee-6a26e08796cf-logs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.001702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.002840 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c5134b-fc5b-453c-87ee-6a26e08796cf-logs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.006776 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-config-data\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.006788 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.008013 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.019132 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-public-tls-certs\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " pod="openstack/nova-api-0" Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.022341 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fjg\" (UniqueName: \"kubernetes.io/projected/69c5134b-fc5b-453c-87ee-6a26e08796cf-kube-api-access-98fjg\") pod \"nova-api-0\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " 
pod="openstack/nova-api-0"
Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.075810 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.344363 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475289a4-cf33-4f56-93d9-73f7551026f8","Type":"ContainerStarted","Data":"3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1"}
Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.371961 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.371937665 podStartE2EDuration="2.371937665s" podCreationTimestamp="2025-10-13 06:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:49:15.357932083 +0000 UTC m=+1245.458355009" watchObservedRunningTime="2025-10-13 06:49:15.371937665 +0000 UTC m=+1245.472360591"
Oct 13 06:49:15 crc kubenswrapper[4833]: I1013 06:49:15.524003 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 13 06:49:15 crc kubenswrapper[4833]: W1013 06:49:15.531649 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69c5134b_fc5b_453c_87ee_6a26e08796cf.slice/crio-84213ef112374c121d2b79a915af71f5d3bc419e0ffe5f2dbd77da56240629a4 WatchSource:0}: Error finding container 84213ef112374c121d2b79a915af71f5d3bc419e0ffe5f2dbd77da56240629a4: Status 404 returned error can't find the container with id 84213ef112374c121d2b79a915af71f5d3bc419e0ffe5f2dbd77da56240629a4
Oct 13 06:49:16 crc kubenswrapper[4833]: I1013 06:49:16.357586 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69c5134b-fc5b-453c-87ee-6a26e08796cf","Type":"ContainerStarted","Data":"d05913f08dee4311606e7fd0c07f800f52f54e4b74d0ee36fae94de7571c4162"}
Oct 13 06:49:16 crc kubenswrapper[4833]: I1013 06:49:16.358266 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69c5134b-fc5b-453c-87ee-6a26e08796cf","Type":"ContainerStarted","Data":"d419f7d589b55bb7907d4d67106a93b203046358490c3edf1dc9eeca8e0bd809"}
Oct 13 06:49:16 crc kubenswrapper[4833]: I1013 06:49:16.358284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69c5134b-fc5b-453c-87ee-6a26e08796cf","Type":"ContainerStarted","Data":"84213ef112374c121d2b79a915af71f5d3bc419e0ffe5f2dbd77da56240629a4"}
Oct 13 06:49:16 crc kubenswrapper[4833]: I1013 06:49:16.388633 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.388614551 podStartE2EDuration="2.388614551s" podCreationTimestamp="2025-10-13 06:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:49:16.382767057 +0000 UTC m=+1246.483189973" watchObservedRunningTime="2025-10-13 06:49:16.388614551 +0000 UTC m=+1246.489037467"
Oct 13 06:49:16 crc kubenswrapper[4833]: I1013 06:49:16.663119 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7eb83c9-6234-42dc-b7de-e9b945d46a50" path="/var/lib/kubelet/pods/c7eb83c9-6234-42dc-b7de-e9b945d46a50/volumes"
Oct 13 06:49:17 crc kubenswrapper[4833]: I1013 06:49:17.700226 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 13 06:49:17 crc kubenswrapper[4833]: I1013 06:49:17.700735 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 13 06:49:18 crc kubenswrapper[4833]: I1013 06:49:18.738882 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Oct 13 06:49:22 crc kubenswrapper[4833]: I1013 06:49:22.700696 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 13 06:49:22 crc kubenswrapper[4833]: I1013 06:49:22.701305 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 13 06:49:23 crc kubenswrapper[4833]: I1013 06:49:23.715907 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 06:49:23 crc kubenswrapper[4833]: I1013 06:49:23.715947 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 06:49:23 crc kubenswrapper[4833]: I1013 06:49:23.738700 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Oct 13 06:49:23 crc kubenswrapper[4833]: I1013 06:49:23.770171 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Oct 13 06:49:24 crc kubenswrapper[4833]: I1013 06:49:24.496300 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Oct 13 06:49:25 crc kubenswrapper[4833]: I1013 06:49:25.077287 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 13 06:49:25 crc kubenswrapper[4833]: I1013 06:49:25.077660 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 13 06:49:26 crc kubenswrapper[4833]: I1013 06:49:26.092864 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 06:49:26 crc kubenswrapper[4833]: I1013 06:49:26.092913 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 06:49:30 crc kubenswrapper[4833]: I1013 06:49:30.609779 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Oct 13 06:49:32 crc kubenswrapper[4833]: I1013 06:49:32.707610 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Oct 13 06:49:32 crc kubenswrapper[4833]: I1013 06:49:32.711667 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Oct 13 06:49:32 crc kubenswrapper[4833]: I1013 06:49:32.717592 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Oct 13 06:49:33 crc kubenswrapper[4833]: I1013 06:49:33.611919 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Oct 13 06:49:35 crc kubenswrapper[4833]: I1013 06:49:35.085477 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Oct 13 06:49:35 crc kubenswrapper[4833]: I1013 06:49:35.086021 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Oct 13 06:49:35 crc kubenswrapper[4833]: I1013 06:49:35.086415 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Oct 13 06:49:35 crc kubenswrapper[4833]: I1013 06:49:35.086462 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Oct 13 06:49:35 crc kubenswrapper[4833]: I1013 06:49:35.098465 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Oct 13 06:49:35 crc kubenswrapper[4833]: I1013 06:49:35.098951 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.615669 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.616375 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="5783401d-3007-4df3-a902-1869d62c4acc" containerName="openstackclient" containerID="cri-o://c8ad3d74107bc327da884a44b88aa948e92843c3f297250dc65f8ce46d13f20f" gracePeriod=2
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.624179 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.903501 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.903729 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="ovn-northd" containerID="cri-o://e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" gracePeriod=30
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.904089 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="openstack-network-exporter" containerID="cri-o://54a34d37063fa7510c51a589e85db2af1e8eef4bc3dcb4482d914746021edcd6" gracePeriod=30
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.924719 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 13 06:49:53 crc kubenswrapper[4833]: E1013 06:49:53.981691 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Oct 13 06:49:53 crc kubenswrapper[4833]: E1013 06:49:53.981785 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data podName:827f736f-2193-4ebd-ab7f-99fb22945d1e nodeName:}" failed. No retries permitted until 2025-10-13 06:49:54.48176266 +0000 UTC m=+1284.582185576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data") pod "rabbitmq-server-0" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e") : configmap "rabbitmq-config-data" not found
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.998450 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 13 06:49:53 crc kubenswrapper[4833]: I1013 06:49:53.999066 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerName="openstack-network-exporter" containerID="cri-o://f94b7170cff535d70b886a880f441f6bc49ccf39c462e54f24bba46d4e1405e6" gracePeriod=300
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.021616 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinderb181-account-delete-hqpht"]
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.022205 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783401d-3007-4df3-a902-1869d62c4acc" containerName="openstackclient"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.022218 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783401d-3007-4df3-a902-1869d62c4acc" containerName="openstackclient"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.022423 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783401d-3007-4df3-a902-1869d62c4acc" containerName="openstackclient"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.023006 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinderb181-account-delete-hqpht"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.035964 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinderb181-account-delete-hqpht"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.088632 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh728\" (UniqueName: \"kubernetes.io/projected/e71af496-4851-4904-9003-0358adc97b94-kube-api-access-lh728\") pod \"cinderb181-account-delete-hqpht\" (UID: \"e71af496-4851-4904-9003-0358adc97b94\") " pod="openstack/cinderb181-account-delete-hqpht"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.109856 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerName="ovsdbserver-nb" containerID="cri-o://e320ad7d5893dd2a3cf0ab4db95afc8ff7b33d93872d0c9924dfcdb12787887f" gracePeriod=300
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.153613 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.165957 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement3a43-account-delete-t5vdd"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.169646 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement3a43-account-delete-t5vdd"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.261475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh728\" (UniqueName: \"kubernetes.io/projected/e71af496-4851-4904-9003-0358adc97b94-kube-api-access-lh728\") pod \"cinderb181-account-delete-hqpht\" (UID: \"e71af496-4851-4904-9003-0358adc97b94\") " pod="openstack/cinderb181-account-delete-hqpht"
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.265939 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.286706 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data podName:0a6ab499-ed60-45e7-b510-5a43422aa7f5 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:54.786674727 +0000 UTC m=+1284.887097643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data") pod "rabbitmq-cell1-server-0" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5") : configmap "rabbitmq-cell1-config-data" not found
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.327752 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh728\" (UniqueName: \"kubernetes.io/projected/e71af496-4851-4904-9003-0358adc97b94-kube-api-access-lh728\") pod \"cinderb181-account-delete-hqpht\" (UID: \"e71af496-4851-4904-9003-0358adc97b94\") " pod="openstack/cinderb181-account-delete-hqpht"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.335988 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement3a43-account-delete-t5vdd"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.363805 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89f8\" (UniqueName: \"kubernetes.io/projected/3f980bce-4b41-461d-9a1f-af4e6fb7455b-kube-api-access-d89f8\") pod \"placement3a43-account-delete-t5vdd\" (UID: \"3f980bce-4b41-461d-9a1f-af4e6fb7455b\") " pod="openstack/placement3a43-account-delete-t5vdd"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.429819 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.430581 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="openstack-network-exporter" containerID="cri-o://006c322d6580fcba72f2451b54eabfb708adf2bd8b5526724079641631c1a6be" gracePeriod=300
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.440956 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ltqfn"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.451492 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ltqfn"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.467385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d89f8\" (UniqueName: \"kubernetes.io/projected/3f980bce-4b41-461d-9a1f-af4e6fb7455b-kube-api-access-d89f8\") pod \"placement3a43-account-delete-t5vdd\" (UID: \"3f980bce-4b41-461d-9a1f-af4e6fb7455b\") " pod="openstack/placement3a43-account-delete-t5vdd"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.532261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d89f8\" (UniqueName: \"kubernetes.io/projected/3f980bce-4b41-461d-9a1f-af4e6fb7455b-kube-api-access-d89f8\") pod \"placement3a43-account-delete-t5vdd\" (UID: \"3f980bce-4b41-461d-9a1f-af4e6fb7455b\") " pod="openstack/placement3a43-account-delete-t5vdd"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.575040 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="ovsdbserver-sb" containerID="cri-o://4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac" gracePeriod=300
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.575487 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinderb181-account-delete-hqpht"
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.576784 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.576824 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data podName:827f736f-2193-4ebd-ab7f-99fb22945d1e nodeName:}" failed. No retries permitted until 2025-10-13 06:49:55.57680913 +0000 UTC m=+1285.677232046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data") pod "rabbitmq-server-0" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e") : configmap "rabbitmq-config-data" not found
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.581642 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-z7kn8"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.617613 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-z7kn8"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.669768 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b33d85d-c95b-4e57-a0d3-be407351e33b" path="/var/lib/kubelet/pods/8b33d85d-c95b-4e57-a0d3-be407351e33b/volumes"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.670581 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d6e331-404e-48b3-b9ee-66386208af92" path="/var/lib/kubelet/pods/b5d6e331-404e-48b3-b9ee-66386208af92/volumes"
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.703991 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac" cmd=["/usr/bin/pidof","ovsdb-server"]
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.714658 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac is running failed: container process not found" containerID="4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac" cmd=["/usr/bin/pidof","ovsdb-server"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.723067 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement3a43-account-delete-t5vdd"
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.723508 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac is running failed: container process not found" containerID="4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac" cmd=["/usr/bin/pidof","ovsdb-server"]
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.723554 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="ovsdbserver-sb"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.738436 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi1bdf-account-delete-k9wpj"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.739913 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi1bdf-account-delete-k9wpj"
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.792343 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Oct 13 06:49:54 crc kubenswrapper[4833]: E1013 06:49:54.792429 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data podName:0a6ab499-ed60-45e7-b510-5a43422aa7f5 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:55.792411007 +0000 UTC m=+1285.892833933 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data") pod "rabbitmq-cell1-server-0" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5") : configmap "rabbitmq-cell1-config-data" not found
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.793388 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi1bdf-account-delete-k9wpj"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.884973 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-vljcq"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.893708 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-vljcq"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.901309 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4t9\" (UniqueName: \"kubernetes.io/projected/f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a-kube-api-access-sc4t9\") pod \"novaapi1bdf-account-delete-k9wpj\" (UID: \"f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a\") " pod="openstack/novaapi1bdf-account-delete-k9wpj"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.902597 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_336d549b-b94b-4966-af57-2289b1c8acc8/ovsdbserver-sb/0.log"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.902656 4833 generic.go:334] "Generic (PLEG): container finished" podID="336d549b-b94b-4966-af57-2289b1c8acc8" containerID="006c322d6580fcba72f2451b54eabfb708adf2bd8b5526724079641631c1a6be" exitCode=2
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.902678 4833 generic.go:334] "Generic (PLEG): container finished" podID="336d549b-b94b-4966-af57-2289b1c8acc8" containerID="4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac" exitCode=143
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.902772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"336d549b-b94b-4966-af57-2289b1c8acc8","Type":"ContainerDied","Data":"006c322d6580fcba72f2451b54eabfb708adf2bd8b5526724079641631c1a6be"}
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.902803 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"336d549b-b94b-4966-af57-2289b1c8acc8","Type":"ContainerDied","Data":"4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac"}
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.940550 4833 generic.go:334] "Generic (PLEG): container finished" podID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerID="54a34d37063fa7510c51a589e85db2af1e8eef4bc3dcb4482d914746021edcd6" exitCode=2
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.940670 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1a7008b-3448-4108-81b0-4d16484a6f7b","Type":"ContainerDied","Data":"54a34d37063fa7510c51a589e85db2af1e8eef4bc3dcb4482d914746021edcd6"}
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.950642 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b1ab7add-ea30-4610-a96a-2cad6ae8e40c/ovsdbserver-nb/0.log"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.950684 4833 generic.go:334] "Generic (PLEG): container finished" podID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerID="f94b7170cff535d70b886a880f441f6bc49ccf39c462e54f24bba46d4e1405e6" exitCode=2
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.950702 4833 generic.go:334] "Generic (PLEG): container finished" podID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerID="e320ad7d5893dd2a3cf0ab4db95afc8ff7b33d93872d0c9924dfcdb12787887f" exitCode=143
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.950720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b1ab7add-ea30-4610-a96a-2cad6ae8e40c","Type":"ContainerDied","Data":"f94b7170cff535d70b886a880f441f6bc49ccf39c462e54f24bba46d4e1405e6"}
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.950746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b1ab7add-ea30-4610-a96a-2cad6ae8e40c","Type":"ContainerDied","Data":"e320ad7d5893dd2a3cf0ab4db95afc8ff7b33d93872d0c9924dfcdb12787887f"}
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.954660 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell13e16-account-delete-nqr9l"]
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.976336 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell13e16-account-delete-nqr9l"
Oct 13 06:49:54 crc kubenswrapper[4833]: I1013 06:49:54.992959 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell13e16-account-delete-nqr9l"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.003671 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4t9\" (UniqueName: \"kubernetes.io/projected/f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a-kube-api-access-sc4t9\") pod \"novaapi1bdf-account-delete-k9wpj\" (UID: \"f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a\") " pod="openstack/novaapi1bdf-account-delete-k9wpj"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.038418 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-l6lww"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.079034 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-l6lww"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.108865 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4t9\" (UniqueName: \"kubernetes.io/projected/f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a-kube-api-access-sc4t9\") pod \"novaapi1bdf-account-delete-k9wpj\" (UID: \"f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a\") " pod="openstack/novaapi1bdf-account-delete-k9wpj"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.108938 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell07d06-account-delete-22d5b"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.110796 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell07d06-account-delete-22d5b"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.113040 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb9ts\" (UniqueName: \"kubernetes.io/projected/d0bc4033-85b9-4212-b1e2-3c5888ddcf0a-kube-api-access-lb9ts\") pod \"novacell13e16-account-delete-nqr9l\" (UID: \"d0bc4033-85b9-4212-b1e2-3c5888ddcf0a\") " pod="openstack/novacell13e16-account-delete-nqr9l"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.122118 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6zmp4"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.141900 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6zmp4"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.155373 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell07d06-account-delete-22d5b"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.162634 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi1bdf-account-delete-k9wpj"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.174663 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-lx4t5"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.174950 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-lx4t5" podUID="c7b98eb9-459c-4a87-88e3-63624b7969b9" containerName="openstack-network-exporter" containerID="cri-o://475bd41d6600098aca15ac0e690b3a40fb08bae6907e1462c6932c353651641a" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.178317 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rtrth"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.222992 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hbd\" (UniqueName: \"kubernetes.io/projected/cb825980-5dc2-420a-8638-9607a9f1eb1f-kube-api-access-82hbd\") pod \"novacell07d06-account-delete-22d5b\" (UID: \"cb825980-5dc2-420a-8638-9607a9f1eb1f\") " pod="openstack/novacell07d06-account-delete-22d5b"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.223059 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb9ts\" (UniqueName: \"kubernetes.io/projected/d0bc4033-85b9-4212-b1e2-3c5888ddcf0a-kube-api-access-lb9ts\") pod \"novacell13e16-account-delete-nqr9l\" (UID: \"d0bc4033-85b9-4212-b1e2-3c5888ddcf0a\") " pod="openstack/novacell13e16-account-delete-nqr9l"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.327553 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hbd\" (UniqueName: \"kubernetes.io/projected/cb825980-5dc2-420a-8638-9607a9f1eb1f-kube-api-access-82hbd\") pod \"novacell07d06-account-delete-22d5b\" (UID: \"cb825980-5dc2-420a-8638-9607a9f1eb1f\") " pod="openstack/novacell07d06-account-delete-22d5b"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.339384 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb9ts\" (UniqueName: \"kubernetes.io/projected/d0bc4033-85b9-4212-b1e2-3c5888ddcf0a-kube-api-access-lb9ts\") pod \"novacell13e16-account-delete-nqr9l\" (UID: \"d0bc4033-85b9-4212-b1e2-3c5888ddcf0a\") " pod="openstack/novacell13e16-account-delete-nqr9l"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.395132 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hbd\" (UniqueName: \"kubernetes.io/projected/cb825980-5dc2-420a-8638-9607a9f1eb1f-kube-api-access-82hbd\") pod \"novacell07d06-account-delete-22d5b\" (UID: \"cb825980-5dc2-420a-8638-9607a9f1eb1f\") " pod="openstack/novacell07d06-account-delete-22d5b"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.406699 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-7j8gx"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.413215 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell13e16-account-delete-nqr9l"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.480196 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-bt47h"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.569253 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell07d06-account-delete-22d5b"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.575584 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576139 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-server" containerID="cri-o://7126480ee2e234f256253f3be3f11958f282b8685399c352e9fe1fed288e1a27" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576572 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="swift-recon-cron" containerID="cri-o://b8c0fd99cc7bf147089ee3034a7d63738ca80123381a9e4fcfb1fb0f59148960" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576634 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="rsync" containerID="cri-o://9b5d782d1b0574c39149c8bb487ccb192e4ad78574ba00d0053886812eecf629" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576673 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-expirer" containerID="cri-o://b4b5158af1d09b9e60b53b67061ee2a7c79d89b8a882cf00a94e754f31eeb82c" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576713 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-updater" containerID="cri-o://9dad12e9c90578194f390432ae46d99079a4a5d4c95d825ba6dcc15e26e20fb2" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576754 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-auditor" containerID="cri-o://02e170a5ebde87992af1b9ec82acf052249debf50eb102dbdc067004eac83dd6" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576792 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-replicator" containerID="cri-o://ef4bcd2d312a9e41b4e42cf22758d715ea58715ab0b3bcd2ec00f09ab616489b" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576827 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-server" containerID="cri-o://5847c7fbaaa19a0f3623af3ea4be590fad1d82ea8d09cd6086994de5af8c21c0" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576869 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-updater" containerID="cri-o://7fbc873a90a0e18d29a4c28fb0bffb723bba4761bbd24dad68303e83c89729b5" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576906 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-auditor" containerID="cri-o://dda623bd500bc7d4d2d7d9bda0087208d82cc295d3ca8170fefd53b38c5cb99b" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576944 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-replicator" containerID="cri-o://b4fe6dd76ddecca8a3c9f5a3f305a67a70a4c5075c8827646cbfd73ae58679f8" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.576982 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-server" containerID="cri-o://a338bdcb17781b39a4745895b5274ba984f3740577bcb756eb359867e4c8349d" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.577020 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-reaper" containerID="cri-o://39a80ccb5dcfc3109b31f5ea15bdac0c69f4fb148fff6b2e14183efb30f32315" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.577067 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-auditor" containerID="cri-o://b40d94a3b28168dc3adfbd67bb111dd625c1b3a8e28dfcf65f21de1d71ac05ef" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.577105 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-replicator" containerID="cri-o://ddc798bf52735ed655b9f2029dcd6fac626a69a57beb0d6ecfacaf0af9255c10" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.582503 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b1ab7add-ea30-4610-a96a-2cad6ae8e40c/ovsdbserver-nb/0.log"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.582598 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.597074 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-bt47h"]
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.598415 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.598458 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data podName:827f736f-2193-4ebd-ab7f-99fb22945d1e nodeName:}" failed. No retries permitted until 2025-10-13 06:49:57.598443915 +0000 UTC m=+1287.698866831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data") pod "rabbitmq-server-0" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e") : configmap "rabbitmq-config-data" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.621572 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.621944 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerName="cinder-scheduler" containerID="cri-o://a3e737b2f25b20ffb3b6db74d1d62d4e6066ed41e5b09d860374f17370033973" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.622211 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerName="probe" containerID="cri-o://2c269f1c0068b7093464c1d749f2f94c414ec34d98624840bb84d4f79d7523e2" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.632539 4833 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/cinder-api-0" secret="" err="secret \"cinder-cinder-dockercfg-wjvmq\" not found"
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.636497 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-t557n"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.636913 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" podUID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" containerName="dnsmasq-dns" containerID="cri-o://49f0e204158e68516824c137ad4d21ef43d7abc0112420959fabb71ee75d4288" gracePeriod=10
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.653486 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jpl8n"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.665739 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jpl8n"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.790646 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.791205 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-log" containerID="cri-o://bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.791379 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-httpd" containerID="cri-o://ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.809493 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.809539 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdb-rundir\") pod \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.809608 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-metrics-certs-tls-certs\") pod \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.809634 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-kube-api-access-9kbh9\") pod \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.809670 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdbserver-nb-tls-certs\") pod \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.809702 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-scripts\") pod \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.809731 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-combined-ca-bundle\") pod \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.809756 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-config\") pod \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\" (UID: \"b1ab7add-ea30-4610-a96a-2cad6ae8e40c\") "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.811301 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "b1ab7add-ea30-4610-a96a-2cad6ae8e40c" (UID: "b1ab7add-ea30-4610-a96a-2cad6ae8e40c"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.814763 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-scripts" (OuterVolumeSpecName: "scripts") pod "b1ab7add-ea30-4610-a96a-2cad6ae8e40c" (UID: "b1ab7add-ea30-4610-a96a-2cad6ae8e40c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.815292 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-config" (OuterVolumeSpecName: "config") pod "b1ab7add-ea30-4610-a96a-2cad6ae8e40c" (UID: "b1ab7add-ea30-4610-a96a-2cad6ae8e40c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.821120 4833 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.821189 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:56.321170321 +0000 UTC m=+1286.421593237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-api-config-data" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.821240 4833 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.821325 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:56.321305855 +0000 UTC m=+1286.421728771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-config-data" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.822247 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.822279 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data podName:0a6ab499-ed60-45e7-b510-5a43422aa7f5 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:57.822270922 +0000 UTC m=+1287.922693838 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data") pod "rabbitmq-cell1-server-0" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5") : configmap "rabbitmq-cell1-config-data" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.822914 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.823959 4833 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.823983 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:55 crc kubenswrapper[4833]: E1013 06:49:55.824017 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:56.3240052 +0000 UTC m=+1286.424428116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-scripts" not found
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.824055 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-config\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.832940 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "b1ab7add-ea30-4610-a96a-2cad6ae8e40c" (UID: "b1ab7add-ea30-4610-a96a-2cad6ae8e40c"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.864494 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-gf8r2"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.870224 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-kube-api-access-9kbh9" (OuterVolumeSpecName: "kube-api-access-9kbh9") pod "b1ab7add-ea30-4610-a96a-2cad6ae8e40c" (UID: "b1ab7add-ea30-4610-a96a-2cad6ae8e40c"). InnerVolumeSpecName "kube-api-access-9kbh9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.919540 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6869bc4646-lrqdg"]
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.919787 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6869bc4646-lrqdg" podUID="626d71e0-e957-4a46-9565-d19058a575c9" containerName="placement-log" containerID="cri-o://b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.920141 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6869bc4646-lrqdg" podUID="626d71e0-e957-4a46-9565-d19058a575c9" containerName="placement-api" containerID="cri-o://65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a" gracePeriod=30
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.929429 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.929461 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-kube-api-access-9kbh9\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:55 crc kubenswrapper[4833]: I1013 06:49:55.990943 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-gf8r2"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.045941 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.120141 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.138857 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.149542 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.149785 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-log" containerID="cri-o://a14dbf5251baa6dac8fcb1a3b7d4c495bc7314e806a21af252e2c6ac6c47c059" gracePeriod=30
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.150167 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-httpd" containerID="cri-o://51e7bc679df23d3e526317bc29126a1542ce20237f8a48a696b934699094819a" gracePeriod=30
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.202796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1ab7add-ea30-4610-a96a-2cad6ae8e40c" (UID: "b1ab7add-ea30-4610-a96a-2cad6ae8e40c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.234959 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.242838 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.248723 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b78565d7c-d78jk"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.248916 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b78565d7c-d78jk" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerName="neutron-api" containerID="cri-o://693f1a344ba18ce292d370fac9613ada4bf6424ec01d376fafe1fa5f5d79c8b2" gracePeriod=30
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.249237 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b78565d7c-d78jk" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerName="neutron-httpd" containerID="cri-o://d43c06194342280710813b12ad00477467b337fe1567ed350bad5cbf383d8289" gracePeriod=30
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.260006 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cwhzf"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.273595 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cwhzf"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.277194 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lx4t5_c7b98eb9-459c-4a87-88e3-63624b7969b9/openstack-network-exporter/0.log"
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.277234 4833 generic.go:334] "Generic (PLEG): container finished" podID="c7b98eb9-459c-4a87-88e3-63624b7969b9" containerID="475bd41d6600098aca15ac0e690b3a40fb08bae6907e1462c6932c353651641a" exitCode=2
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.277275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lx4t5" event={"ID":"c7b98eb9-459c-4a87-88e3-63624b7969b9","Type":"ContainerDied","Data":"475bd41d6600098aca15ac0e690b3a40fb08bae6907e1462c6932c353651641a"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.291713 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kgzz2"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.301067 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b1ab7add-ea30-4610-a96a-2cad6ae8e40c/ovsdbserver-nb/0.log"
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.301258 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.301710 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kgzz2"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.301772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b1ab7add-ea30-4610-a96a-2cad6ae8e40c","Type":"ContainerDied","Data":"0c041d57a850254e5a259cd1bbac5d33a62e8cb63bbd03709ff6a7e402699fb6"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.301810 4833 scope.go:117] "RemoveContainer" containerID="f94b7170cff535d70b886a880f441f6bc49ccf39c462e54f24bba46d4e1405e6"
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.325701 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c086-account-create-2xc6m"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.333290 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c086-account-create-2xc6m"]
Oct 13 06:49:56 crc kubenswrapper[4833]: E1013 06:49:56.349381 4833 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found
Oct 13 06:49:56 crc kubenswrapper[4833]: E1013 06:49:56.349706 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:57.349689628 +0000 UTC m=+1287.450112544 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-config-data" not found
Oct 13 06:49:56 crc kubenswrapper[4833]: E1013 06:49:56.349944 4833 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found
Oct 13 06:49:56 crc kubenswrapper[4833]: E1013 06:49:56.350004 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:57.349987316 +0000 UTC m=+1287.450410232 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-scripts" not found
Oct 13 06:49:56 crc kubenswrapper[4833]: E1013 06:49:56.350004 4833 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found
Oct 13 06:49:56 crc kubenswrapper[4833]: E1013 06:49:56.350032 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:57.350024997 +0000 UTC m=+1287.450447913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-api-config-data" not found
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.351494 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b181-account-create-7wdwx"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.351524 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b181-account-create-7wdwx"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.360613 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6csh9"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.380302 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinderb181-account-delete-hqpht"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.380337 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6csh9"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.395342 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-68a3-account-create-tmjzb"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.397907 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-68a3-account-create-tmjzb"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400802 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="b4b5158af1d09b9e60b53b67061ee2a7c79d89b8a882cf00a94e754f31eeb82c" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400834 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="9dad12e9c90578194f390432ae46d99079a4a5d4c95d825ba6dcc15e26e20fb2" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400847 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="02e170a5ebde87992af1b9ec82acf052249debf50eb102dbdc067004eac83dd6" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400855 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="ef4bcd2d312a9e41b4e42cf22758d715ea58715ab0b3bcd2ec00f09ab616489b" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400863 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="7fbc873a90a0e18d29a4c28fb0bffb723bba4761bbd24dad68303e83c89729b5" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400873 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="dda623bd500bc7d4d2d7d9bda0087208d82cc295d3ca8170fefd53b38c5cb99b" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400882 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="b4fe6dd76ddecca8a3c9f5a3f305a67a70a4c5075c8827646cbfd73ae58679f8" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400889 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="39a80ccb5dcfc3109b31f5ea15bdac0c69f4fb148fff6b2e14183efb30f32315" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400897 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="b40d94a3b28168dc3adfbd67bb111dd625c1b3a8e28dfcf65f21de1d71ac05ef" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400905 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="ddc798bf52735ed655b9f2029dcd6fac626a69a57beb0d6ecfacaf0af9255c10" exitCode=0
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"b4b5158af1d09b9e60b53b67061ee2a7c79d89b8a882cf00a94e754f31eeb82c"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"9dad12e9c90578194f390432ae46d99079a4a5d4c95d825ba6dcc15e26e20fb2"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400987 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"02e170a5ebde87992af1b9ec82acf052249debf50eb102dbdc067004eac83dd6"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.400998 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"ef4bcd2d312a9e41b4e42cf22758d715ea58715ab0b3bcd2ec00f09ab616489b"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.401009 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"7fbc873a90a0e18d29a4c28fb0bffb723bba4761bbd24dad68303e83c89729b5"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.401019 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"dda623bd500bc7d4d2d7d9bda0087208d82cc295d3ca8170fefd53b38c5cb99b"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.401029 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"b4fe6dd76ddecca8a3c9f5a3f305a67a70a4c5075c8827646cbfd73ae58679f8"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.401040 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"39a80ccb5dcfc3109b31f5ea15bdac0c69f4fb148fff6b2e14183efb30f32315"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.401051 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"b40d94a3b28168dc3adfbd67bb111dd625c1b3a8e28dfcf65f21de1d71ac05ef"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.401062 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"ddc798bf52735ed655b9f2029dcd6fac626a69a57beb0d6ecfacaf0af9255c10"}
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.408726 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f5ac-account-create-dhkt2"]
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.412102 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerName="rabbitmq" containerID="cri-o://0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b" gracePeriod=604800
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.412151 4833 generic.go:334] "Generic (PLEG): container finished" podID="5783401d-3007-4df3-a902-1869d62c4acc" containerID="c8ad3d74107bc327da884a44b88aa948e92843c3f297250dc65f8ce46d13f20f" exitCode=137
Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.422726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b1ab7add-ea30-4610-a96a-2cad6ae8e40c" (UID: "b1ab7add-ea30-4610-a96a-2cad6ae8e40c"). InnerVolumeSpecName "metrics-certs-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.423801 4833 generic.go:334] "Generic (PLEG): container finished" podID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" containerID="49f0e204158e68516824c137ad4d21ef43d7abc0112420959fabb71ee75d4288" exitCode=0 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.424060 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerName="cinder-api-log" containerID="cri-o://ae447bf76892b7eb14df95538c7ae37b62536247b753f59a219c7f2aae34cdf7" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.424439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" event={"ID":"61fe1ee9-51ff-4f77-8dd7-4e29e3365556","Type":"ContainerDied","Data":"49f0e204158e68516824c137ad4d21ef43d7abc0112420959fabb71ee75d4288"} Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.424850 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerName="cinder-api" containerID="cri-o://b566af3cdcf6966c91d8eb92814d438d4b7a59c8593fa14b053ca258afc3130a" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.424939 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f5ac-account-create-dhkt2"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.429073 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "b1ab7add-ea30-4610-a96a-2cad6ae8e40c" (UID: "b1ab7add-ea30-4610-a96a-2cad6ae8e40c"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.433713 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.444665 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2s986"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.450896 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.450921 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab7add-ea30-4610-a96a-2cad6ae8e40c-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.454071 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2s986"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.468730 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hqwdg"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.478675 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3a43-account-create-tzctv"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.492706 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hqwdg"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.505476 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement3a43-account-delete-t5vdd"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.505876 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" containerName="rabbitmq" containerID="cri-o://24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27" gracePeriod=604800 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.514279 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3a43-account-create-tzctv"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.516502 4833 scope.go:117] "RemoveContainer" containerID="e320ad7d5893dd2a3cf0ab4db95afc8ff7b33d93872d0c9924dfcdb12787887f" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.521032 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_336d549b-b94b-4966-af57-2289b1c8acc8/ovsdbserver-sb/0.log" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.521125 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.542624 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.570997 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lx4t5_c7b98eb9-459c-4a87-88e3-63624b7969b9/openstack-network-exporter/0.log" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.571079 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.573556 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f847dcbd8-p95b9"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.573790 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerName="barbican-keystone-listener-log" containerID="cri-o://4c2d835c2cdf83c5990f9e667ecb740187ba835cbe395bfdce7fceef0f080f02" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.573887 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerName="barbican-keystone-listener" containerID="cri-o://40c6e393bbfaf517c5fecd9b2453770dae8d96c73815045f791a0be9bcebd55d" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.611637 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.611928 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-log" containerID="cri-o://988658baa74f964f157fcd718e94c95fc2e7688fc3335d190e84005d02e7fd3a" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.612340 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-metadata" containerID="cri-o://0c49851d3254ed77c14a56073d79efd51082af7a60fed7458676ff9c919c96c6" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.672481 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd" containerID="cri-o://8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" gracePeriod=29 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.677461 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm6jx\" (UniqueName: \"kubernetes.io/projected/336d549b-b94b-4966-af57-2289b1c8acc8-kube-api-access-wm6jx\") pod \"336d549b-b94b-4966-af57-2289b1c8acc8\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.677544 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdbserver-sb-tls-certs\") pod \"336d549b-b94b-4966-af57-2289b1c8acc8\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.678912 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdb-rundir\") pod \"336d549b-b94b-4966-af57-2289b1c8acc8\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.679016 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-combined-ca-bundle\") pod \"336d549b-b94b-4966-af57-2289b1c8acc8\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.681254 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-config\") pod \"336d549b-b94b-4966-af57-2289b1c8acc8\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.682355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-scripts\") pod \"336d549b-b94b-4966-af57-2289b1c8acc8\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.682401 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-metrics-certs-tls-certs\") pod \"336d549b-b94b-4966-af57-2289b1c8acc8\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.682433 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"336d549b-b94b-4966-af57-2289b1c8acc8\" (UID: \"336d549b-b94b-4966-af57-2289b1c8acc8\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.692261 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-config" (OuterVolumeSpecName: "config") pod "336d549b-b94b-4966-af57-2289b1c8acc8" (UID: "336d549b-b94b-4966-af57-2289b1c8acc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.692468 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-scripts" (OuterVolumeSpecName: "scripts") pod "336d549b-b94b-4966-af57-2289b1c8acc8" (UID: "336d549b-b94b-4966-af57-2289b1c8acc8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.695587 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "336d549b-b94b-4966-af57-2289b1c8acc8" (UID: "336d549b-b94b-4966-af57-2289b1c8acc8"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.727868 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "336d549b-b94b-4966-af57-2289b1c8acc8" (UID: "336d549b-b94b-4966-af57-2289b1c8acc8"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.730440 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03753511-4f13-4f91-abb0-1158faba0e60" path="/var/lib/kubelet/pods/03753511-4f13-4f91-abb0-1158faba0e60/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: E1013 06:49:56.734921 4833 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Oct 13 06:49:56 crc kubenswrapper[4833]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 13 06:49:56 crc kubenswrapper[4833]: + source /usr/local/bin/container-scripts/functions Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNBridge=br-int Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNRemote=tcp:localhost:6642 Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNEncapType=geneve Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNAvailabilityZones= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ EnableChassisAsGateway=true Oct 13 06:49:56 crc kubenswrapper[4833]: ++ PhysicalNetworks= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNHostName= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 13 06:49:56 crc kubenswrapper[4833]: ++ ovs_dir=/var/lib/openvswitch Oct 13 06:49:56 crc kubenswrapper[4833]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 13 06:49:56 crc kubenswrapper[4833]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 13 06:49:56 crc kubenswrapper[4833]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + sleep 0.5 Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + sleep 0.5 Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + cleanup_ovsdb_server_semaphore Oct 13 06:49:56 crc kubenswrapper[4833]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 13 06:49:56 crc kubenswrapper[4833]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 13 06:49:56 crc kubenswrapper[4833]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-7j8gx" message=< Oct 13 06:49:56 crc kubenswrapper[4833]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 13 06:49:56 crc kubenswrapper[4833]: + source /usr/local/bin/container-scripts/functions Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNBridge=br-int Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNRemote=tcp:localhost:6642 Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNEncapType=geneve Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNAvailabilityZones= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ EnableChassisAsGateway=true Oct 13 06:49:56 crc kubenswrapper[4833]: ++ PhysicalNetworks= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNHostName= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 13 06:49:56 crc kubenswrapper[4833]: ++ ovs_dir=/var/lib/openvswitch Oct 13 06:49:56 crc kubenswrapper[4833]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 13 06:49:56 crc kubenswrapper[4833]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 13 06:49:56 crc kubenswrapper[4833]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + sleep 0.5 Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + sleep 0.5 Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + cleanup_ovsdb_server_semaphore Oct 13 06:49:56 crc kubenswrapper[4833]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 13 06:49:56 crc kubenswrapper[4833]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 13 06:49:56 crc kubenswrapper[4833]: > Oct 13 06:49:56 crc kubenswrapper[4833]: E1013 06:49:56.735001 4833 kuberuntime_container.go:691] "PreStop hook failed" err=< Oct 13 06:49:56 crc kubenswrapper[4833]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 13 06:49:56 crc kubenswrapper[4833]: + source /usr/local/bin/container-scripts/functions Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNBridge=br-int Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNRemote=tcp:localhost:6642 Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNEncapType=geneve Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNAvailabilityZones= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ EnableChassisAsGateway=true Oct 13 06:49:56 crc kubenswrapper[4833]: ++ PhysicalNetworks= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ OVNHostName= Oct 13 06:49:56 crc kubenswrapper[4833]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 13 06:49:56 crc kubenswrapper[4833]: ++ ovs_dir=/var/lib/openvswitch Oct 13 06:49:56 crc kubenswrapper[4833]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 13 06:49:56 crc kubenswrapper[4833]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 13 06:49:56 crc kubenswrapper[4833]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + sleep 0.5 Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + sleep 0.5 Oct 13 06:49:56 crc kubenswrapper[4833]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 13 06:49:56 crc kubenswrapper[4833]: + cleanup_ovsdb_server_semaphore Oct 13 06:49:56 crc kubenswrapper[4833]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 13 06:49:56 crc kubenswrapper[4833]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 13 06:49:56 crc kubenswrapper[4833]: > pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" containerID="cri-o://ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.735038 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" containerID="cri-o://ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" gracePeriod=29 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.741924 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18230b58-b3cf-42e9-afa9-cf99564680d4" path="/var/lib/kubelet/pods/18230b58-b3cf-42e9-afa9-cf99564680d4/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.742888 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401c9b31-e308-4305-b56e-29fc8594856d" path="/var/lib/kubelet/pods/401c9b31-e308-4305-b56e-29fc8594856d/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.752190 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5920217a-f2c4-4b9a-97ac-b5b98be2e85d" path="/var/lib/kubelet/pods/5920217a-f2c4-4b9a-97ac-b5b98be2e85d/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.754157 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.754999 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.758180 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336d549b-b94b-4966-af57-2289b1c8acc8-kube-api-access-wm6jx" (OuterVolumeSpecName: "kube-api-access-wm6jx") pod "336d549b-b94b-4966-af57-2289b1c8acc8" (UID: "336d549b-b94b-4966-af57-2289b1c8acc8"). InnerVolumeSpecName "kube-api-access-wm6jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.772583 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6571f0ea-a7f7-4ba4-bd41-a59f92642ddc" path="/var/lib/kubelet/pods/6571f0ea-a7f7-4ba4-bd41-a59f92642ddc/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.791104 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" containerName="galera" containerID="cri-o://fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.796444 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "336d549b-b94b-4966-af57-2289b1c8acc8" (UID: "336d549b-b94b-4966-af57-2289b1c8acc8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.801224 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d00616b-9c95-4ae5-aabc-60e2fb039035" path="/var/lib/kubelet/pods/6d00616b-9c95-4ae5-aabc-60e2fb039035/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.803770 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705bcc31-a619-447e-b29a-e98c322e5617" path="/var/lib/kubelet/pods/705bcc31-a619-447e-b29a-e98c322e5617/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.815960 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config-secret\") pod \"5783401d-3007-4df3-a902-1869d62c4acc\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.816500 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b98eb9-459c-4a87-88e3-63624b7969b9-config\") pod \"c7b98eb9-459c-4a87-88e3-63624b7969b9\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.817038 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdn74\" (UniqueName: \"kubernetes.io/projected/5783401d-3007-4df3-a902-1869d62c4acc-kube-api-access-fdn74\") pod \"5783401d-3007-4df3-a902-1869d62c4acc\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.818312 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73076b06-20be-4053-9aeb-08c4e6db07a7" path="/var/lib/kubelet/pods/73076b06-20be-4053-9aeb-08c4e6db07a7/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.820203 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b98eb9-459c-4a87-88e3-63624b7969b9-config" (OuterVolumeSpecName: "config") pod "c7b98eb9-459c-4a87-88e3-63624b7969b9" (UID: "c7b98eb9-459c-4a87-88e3-63624b7969b9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.820810 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-combined-ca-bundle\") pod \"c7b98eb9-459c-4a87-88e3-63624b7969b9\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.820925 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-config\") pod \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.821041 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config\") pod \"5783401d-3007-4df3-a902-1869d62c4acc\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.821154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-swift-storage-0\") pod \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.821248 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-combined-ca-bundle\") pod \"5783401d-3007-4df3-a902-1869d62c4acc\" (UID: \"5783401d-3007-4df3-a902-1869d62c4acc\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.822273 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovn-rundir\") pod \"c7b98eb9-459c-4a87-88e3-63624b7969b9\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.822385 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-nb\") pod \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.822521 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-svc\") pod \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.823346 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-sb\") pod \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.823453 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxshq\" (UniqueName: \"kubernetes.io/projected/c7b98eb9-459c-4a87-88e3-63624b7969b9-kube-api-access-qxshq\") pod \"c7b98eb9-459c-4a87-88e3-63624b7969b9\" (UID: 
\"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.823544 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-metrics-certs-tls-certs\") pod \"c7b98eb9-459c-4a87-88e3-63624b7969b9\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.823654 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgb85\" (UniqueName: \"kubernetes.io/projected/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-kube-api-access-lgb85\") pod \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\" (UID: \"61fe1ee9-51ff-4f77-8dd7-4e29e3365556\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.823758 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovs-rundir\") pod \"c7b98eb9-459c-4a87-88e3-63624b7969b9\" (UID: \"c7b98eb9-459c-4a87-88e3-63624b7969b9\") " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.824105 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "c7b98eb9-459c-4a87-88e3-63624b7969b9" (UID: "c7b98eb9-459c-4a87-88e3-63624b7969b9"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.827367 4833 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovs-rundir\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.827410 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.827436 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.827449 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b98eb9-459c-4a87-88e3-63624b7969b9-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.827462 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm6jx\" (UniqueName: \"kubernetes.io/projected/336d549b-b94b-4966-af57-2289b1c8acc8-kube-api-access-wm6jx\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.827474 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.827485 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.827496 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/336d549b-b94b-4966-af57-2289b1c8acc8-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.829657 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803c0a56-1e0a-4c20-a1c7-32ecf709cda4" path="/var/lib/kubelet/pods/803c0a56-1e0a-4c20-a1c7-32ecf709cda4/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.841910 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "c7b98eb9-459c-4a87-88e3-63624b7969b9" (UID: "c7b98eb9-459c-4a87-88e3-63624b7969b9"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.842618 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5783401d-3007-4df3-a902-1869d62c4acc-kube-api-access-fdn74" (OuterVolumeSpecName: "kube-api-access-fdn74") pod "5783401d-3007-4df3-a902-1869d62c4acc" (UID: "5783401d-3007-4df3-a902-1869d62c4acc"). InnerVolumeSpecName "kube-api-access-fdn74". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.844812 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82be7aac-cd15-4ed6-bec2-07ff9928d194" path="/var/lib/kubelet/pods/82be7aac-cd15-4ed6-bec2-07ff9928d194/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.847234 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852eefdb-1f3c-4a86-a930-24627d79056e" path="/var/lib/kubelet/pods/852eefdb-1f3c-4a86-a930-24627d79056e/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.848967 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fb9926-26ba-4c88-b633-7192f7391494" path="/var/lib/kubelet/pods/88fb9926-26ba-4c88-b633-7192f7391494/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.850395 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d211019-3f1c-40de-82fc-7ed19c831c7c" path="/var/lib/kubelet/pods/8d211019-3f1c-40de-82fc-7ed19c831c7c/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.851399 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949b1dbe-5e00-401f-a0a6-d0830a0092ad" path="/var/lib/kubelet/pods/949b1dbe-5e00-401f-a0a6-d0830a0092ad/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.852053 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991076c0-40c3-4bdb-9766-a2c71b011caf" path="/var/lib/kubelet/pods/991076c0-40c3-4bdb-9766-a2c71b011caf/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.852782 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2b8534-5a7a-4f8c-95d6-f3ceb6475639" path="/var/lib/kubelet/pods/9d2b8534-5a7a-4f8c-95d6-f3ceb6475639/volumes" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.860430 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b98eb9-459c-4a87-88e3-63624b7969b9-kube-api-access-qxshq" (OuterVolumeSpecName: "kube-api-access-qxshq") pod "c7b98eb9-459c-4a87-88e3-63624b7969b9" (UID: "c7b98eb9-459c-4a87-88e3-63624b7969b9"). InnerVolumeSpecName "kube-api-access-qxshq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.861220 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "336d549b-b94b-4966-af57-2289b1c8acc8" (UID: "336d549b-b94b-4966-af57-2289b1c8acc8"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.867372 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-kube-api-access-lgb85" (OuterVolumeSpecName: "kube-api-access-lgb85") pod "61fe1ee9-51ff-4f77-8dd7-4e29e3365556" (UID: "61fe1ee9-51ff-4f77-8dd7-4e29e3365556"). InnerVolumeSpecName "kube-api-access-lgb85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.871286 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-bf7fd98f9-j4rff"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.871324 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-wmxsj"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.871336 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-wmxsj"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.871350 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-595797578d-ddhnv"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.871360 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3e16-account-create-mf4v7"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.871370 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3e16-account-create-mf4v7"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.871575 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-595797578d-ddhnv" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api-log" containerID="cri-o://e7c4fb08195b32e50c609a55ad8f5ba6ccf4ebfb598a3cf9e868edc5d55a8023" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.872884 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-bf7fd98f9-j4rff" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerName="barbican-worker-log" containerID="cri-o://f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.873178 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-595797578d-ddhnv" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api" containerID="cri-o://cbe232a2ef3f6d567a6669fbc2756b76c79d3815ef20ff4c3ce4de44b9dfa6da" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.873224 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-bf7fd98f9-j4rff" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerName="barbican-worker" containerID="cri-o://c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.873377 4833 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-0"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.873493 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-log" containerID="cri-o://d419f7d589b55bb7907d4d67106a93b203046358490c3edf1dc9eeca8e0bd809" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.873578 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-api" containerID="cri-o://d05913f08dee4311606e7fd0c07f800f52f54e4b74d0ee36fae94de7571c4162" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: W1013 06:49:56.878234 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8c66f33_3fbd_4a35_8e0d_2b38c3cd513a.slice/crio-482ab47ae80b8a955457765311b18bc8725ec266f7a4be7f5bb96f06d3aaaf2e WatchSource:0}: Error finding container 482ab47ae80b8a955457765311b18bc8725ec266f7a4be7f5bb96f06d3aaaf2e: Status 404 returned error can't find the container with id 482ab47ae80b8a955457765311b18bc8725ec266f7a4be7f5bb96f06d3aaaf2e Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.887297 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell13e16-account-delete-nqr9l"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.900984 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bxq6n"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.913526 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bxq6n"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.924946 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.925220 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="fd0bf370-6aac-4334-b612-db75770844df" containerName="nova-cell0-conductor-conductor" containerID="cri-o://d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.930163 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgb85\" (UniqueName: \"kubernetes.io/projected/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-kube-api-access-lgb85\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.930189 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdn74\" (UniqueName: \"kubernetes.io/projected/5783401d-3007-4df3-a902-1869d62c4acc-kube-api-access-fdn74\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.930198 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.930208 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7b98eb9-459c-4a87-88e3-63624b7969b9-ovn-rundir\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.930216 4833 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qxshq\" (UniqueName: \"kubernetes.io/projected/c7b98eb9-459c-4a87-88e3-63624b7969b9-kube-api-access-qxshq\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.938606 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mc8l7"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.960818 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.961041 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" containerName="nova-cell1-conductor-conductor" containerID="cri-o://0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.972676 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mc8l7"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.985495 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.985802 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7113b07b-875e-4a09-a221-be312e4d0dce" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71" gracePeriod=30 Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.991931 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:49:56 crc kubenswrapper[4833]: I1013 06:49:56.992163 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="475289a4-cf33-4f56-93d9-73f7551026f8" containerName="nova-scheduler-scheduler" containerID="cri-o://3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1" gracePeriod=30 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.005801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5783401d-3007-4df3-a902-1869d62c4acc" (UID: "5783401d-3007-4df3-a902-1869d62c4acc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.017827 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.024696 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.030018 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7b98eb9-459c-4a87-88e3-63624b7969b9" (UID: "c7b98eb9-459c-4a87-88e3-63624b7969b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.031073 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinderb181-account-delete-hqpht"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.051534 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.051582 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.091902 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61fe1ee9-51ff-4f77-8dd7-4e29e3365556" (UID: "61fe1ee9-51ff-4f77-8dd7-4e29e3365556"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.135473 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "336d549b-b94b-4966-af57-2289b1c8acc8" (UID: "336d549b-b94b-4966-af57-2289b1c8acc8"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.150331 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement3a43-account-delete-t5vdd"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.153734 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.154010 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/336d549b-b94b-4966-af57-2289b1c8acc8-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.156055 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi1bdf-account-delete-k9wpj"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.164724 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.168679 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell13e16-account-delete-nqr9l"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.171494 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5783401d-3007-4df3-a902-1869d62c4acc" (UID: "5783401d-3007-4df3-a902-1869d62c4acc"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.173959 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell07d06-account-delete-22d5b"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.198074 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5783401d-3007-4df3-a902-1869d62c4acc" (UID: "5783401d-3007-4df3-a902-1869d62c4acc"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.214088 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61fe1ee9-51ff-4f77-8dd7-4e29e3365556" (UID: "61fe1ee9-51ff-4f77-8dd7-4e29e3365556"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.236270 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-config" (OuterVolumeSpecName: "config") pod "61fe1ee9-51ff-4f77-8dd7-4e29e3365556" (UID: "61fe1ee9-51ff-4f77-8dd7-4e29e3365556"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.255956 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.256005 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.256021 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.256035 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.256048 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5783401d-3007-4df3-a902-1869d62c4acc-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.290826 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "61fe1ee9-51ff-4f77-8dd7-4e29e3365556" (UID: "61fe1ee9-51ff-4f77-8dd7-4e29e3365556"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.308406 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "61fe1ee9-51ff-4f77-8dd7-4e29e3365556" (UID: "61fe1ee9-51ff-4f77-8dd7-4e29e3365556"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.382807 4833 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.382896 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:59.382870896 +0000 UTC m=+1289.483293892 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-api-config-data" not found Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.383287 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.383392 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61fe1ee9-51ff-4f77-8dd7-4e29e3365556-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.383344 4833 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.383674 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:59.383610547 +0000 UTC m=+1289.484033563 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-scripts" not found Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.384395 4833 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.384453 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:49:59.38443756 +0000 UTC m=+1289.484860586 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-config-data" not found Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.431266 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c7b98eb9-459c-4a87-88e3-63624b7969b9" (UID: "c7b98eb9-459c-4a87-88e3-63624b7969b9"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.440772 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lx4t5_c7b98eb9-459c-4a87-88e3-63624b7969b9/openstack-network-exporter/0.log" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.441000 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lx4t5" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.441639 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lx4t5" event={"ID":"c7b98eb9-459c-4a87-88e3-63624b7969b9","Type":"ContainerDied","Data":"d3bf196a8a0c5a27ec39877a9487a2fb49faed62d9de1c40612c34a540858e39"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.441704 4833 scope.go:117] "RemoveContainer" containerID="475bd41d6600098aca15ac0e690b3a40fb08bae6907e1462c6932c353651641a" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.457795 4833 generic.go:334] "Generic (PLEG): container finished" podID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerID="e7c4fb08195b32e50c609a55ad8f5ba6ccf4ebfb598a3cf9e868edc5d55a8023" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.457893 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595797578d-ddhnv" event={"ID":"7be1410c-e237-4abe-9a2d-c8e8b5242d93","Type":"ContainerDied","Data":"e7c4fb08195b32e50c609a55ad8f5ba6ccf4ebfb598a3cf9e868edc5d55a8023"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.498407 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b98eb9-459c-4a87-88e3-63624b7969b9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.502732 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-lx4t5"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.507874 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-lx4t5"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.518948 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="9b5d782d1b0574c39149c8bb487ccb192e4ad78574ba00d0053886812eecf629" exitCode=0 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.518978 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="5847c7fbaaa19a0f3623af3ea4be590fad1d82ea8d09cd6086994de5af8c21c0" exitCode=0 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.518986 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" 
containerID="a338bdcb17781b39a4745895b5274ba984f3740577bcb756eb359867e4c8349d" exitCode=0 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.518993 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="7126480ee2e234f256253f3be3f11958f282b8685399c352e9fe1fed288e1a27" exitCode=0 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.519038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"9b5d782d1b0574c39149c8bb487ccb192e4ad78574ba00d0053886812eecf629"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.519062 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"5847c7fbaaa19a0f3623af3ea4be590fad1d82ea8d09cd6086994de5af8c21c0"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.519071 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"a338bdcb17781b39a4745895b5274ba984f3740577bcb756eb359867e4c8349d"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.519079 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"7126480ee2e234f256253f3be3f11958f282b8685399c352e9fe1fed288e1a27"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.527300 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell07d06-account-delete-22d5b" event={"ID":"cb825980-5dc2-420a-8638-9607a9f1eb1f","Type":"ContainerStarted","Data":"3d572d1e31130a37e1ab161e394d41c0e399953a2d148de59f63141b31c9a3af"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.534069 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerID="f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.534129 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bf7fd98f9-j4rff" event={"ID":"6f85d40e-16b8-4ece-a268-8b4d227ac36c","Type":"ContainerDied","Data":"f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.544901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement3a43-account-delete-t5vdd" event={"ID":"3f980bce-4b41-461d-9a1f-af4e6fb7455b","Type":"ContainerStarted","Data":"e3acced9f54b0c35f66144dd5bb8908e36538d2ecb0db8b933eaa70b09ea5a05"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.559890 4833 generic.go:334] "Generic (PLEG): container finished" podID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerID="d419f7d589b55bb7907d4d67106a93b203046358490c3edf1dc9eeca8e0bd809" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.559971 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69c5134b-fc5b-453c-87ee-6a26e08796cf","Type":"ContainerDied","Data":"d419f7d589b55bb7907d4d67106a93b203046358490c3edf1dc9eeca8e0bd809"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.562085 4833 generic.go:334] "Generic (PLEG): container finished" podID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerID="2c269f1c0068b7093464c1d749f2f94c414ec34d98624840bb84d4f79d7523e2" 
exitCode=0 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.562150 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b3604db-dabe-4d61-918d-b41a85fbcbf5","Type":"ContainerDied","Data":"2c269f1c0068b7093464c1d749f2f94c414ec34d98624840bb84d4f79d7523e2"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.564657 4833 generic.go:334] "Generic (PLEG): container finished" podID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerID="bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.564699 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"baa4dafc-7be7-4f97-ba72-359c27e3151c","Type":"ContainerDied","Data":"bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.578358 4833 generic.go:334] "Generic (PLEG): container finished" podID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" exitCode=0 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.578427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7j8gx" event={"ID":"6aef55de-c4dd-409e-b9f1-b79adc99ea8d","Type":"ContainerDied","Data":"ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574"} Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.600187 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.600250 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data podName:827f736f-2193-4ebd-ab7f-99fb22945d1e nodeName:}" failed. No retries permitted until 2025-10-13 06:50:01.600232032 +0000 UTC m=+1291.700654938 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data") pod "rabbitmq-server-0" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e") : configmap "rabbitmq-config-data" not found Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.606981 4833 generic.go:334] "Generic (PLEG): container finished" podID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerID="ae447bf76892b7eb14df95538c7ae37b62536247b753f59a219c7f2aae34cdf7" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.607101 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaeaef09-d532-4399-b9bb-c9e59fbf1a62","Type":"ContainerDied","Data":"ae447bf76892b7eb14df95538c7ae37b62536247b753f59a219c7f2aae34cdf7"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.610772 4833 generic.go:334] "Generic (PLEG): container finished" podID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerID="d43c06194342280710813b12ad00477467b337fe1567ed350bad5cbf383d8289" exitCode=0 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.610823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b78565d7c-d78jk" event={"ID":"65e5cee6-ee1c-4612-89b8-c2cfe968438b","Type":"ContainerDied","Data":"d43c06194342280710813b12ad00477467b337fe1567ed350bad5cbf383d8289"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.612739 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" event={"ID":"61fe1ee9-51ff-4f77-8dd7-4e29e3365556","Type":"ContainerDied","Data":"5a42c0a290cb727ea32cd425f987d2d33ebb3b618ecc9547251db15c231d4309"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.612835 4833 scope.go:117] "RemoveContainer" containerID="49f0e204158e68516824c137ad4d21ef43d7abc0112420959fabb71ee75d4288" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.612972 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-t557n" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.646312 4833 generic.go:334] "Generic (PLEG): container finished" podID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerID="4c2d835c2cdf83c5990f9e667ecb740187ba835cbe395bfdce7fceef0f080f02" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.646393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" event={"ID":"8a18c26d-a476-4e4b-9320-84369da38cf2","Type":"ContainerDied","Data":"4c2d835c2cdf83c5990f9e667ecb740187ba835cbe395bfdce7fceef0f080f02"} Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.650082 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.656693 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.657290 4833 generic.go:334] "Generic (PLEG): container finished" podID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerID="a14dbf5251baa6dac8fcb1a3b7d4c495bc7314e806a21af252e2c6ac6c47c059" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.657352 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d04cb142-7473-455b-8d5b-f79d879d8d58","Type":"ContainerDied","Data":"a14dbf5251baa6dac8fcb1a3b7d4c495bc7314e806a21af252e2c6ac6c47c059"} Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.661138 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.661184 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="ovn-northd" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.662390 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderb181-account-delete-hqpht" event={"ID":"e71af496-4851-4904-9003-0358adc97b94","Type":"ContainerStarted","Data":"c04550cbfa74b033fe8bd4f01744b1ac2db6a04a8e44ce1d4bc1152097543cfa"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.676686 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.699060 4833 generic.go:334] "Generic (PLEG): container finished" podID="626d71e0-e957-4a46-9565-d19058a575c9" containerID="b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.699139 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869bc4646-lrqdg" event={"ID":"626d71e0-e957-4a46-9565-d19058a575c9","Type":"ContainerDied","Data":"b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.727508 4833 generic.go:334] "Generic (PLEG): container finished" podID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerID="988658baa74f964f157fcd718e94c95fc2e7688fc3335d190e84005d02e7fd3a" exitCode=143 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.727623 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aaf5d8e-00de-473b-91d2-1dd8a7354853","Type":"ContainerDied","Data":"988658baa74f964f157fcd718e94c95fc2e7688fc3335d190e84005d02e7fd3a"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.797370 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell13e16-account-delete-nqr9l" event={"ID":"d0bc4033-85b9-4212-b1e2-3c5888ddcf0a","Type":"ContainerStarted","Data":"084f5b319b94339308257cc211ed98fbc933a70e5795aa552b94ef3e7e9b65f2"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.811878 4833 scope.go:117] "RemoveContainer" containerID="39da9c88b223beca5465c065282b2d0f918ef244673bc10245cc5a611b105a98" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.821679 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-t557n"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.844213 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi1bdf-account-delete-k9wpj" event={"ID":"f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a","Type":"ContainerStarted","Data":"482ab47ae80b8a955457765311b18bc8725ec266f7a4be7f5bb96f06d3aaaf2e"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.851974 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_336d549b-b94b-4966-af57-2289b1c8acc8/ovsdbserver-sb/0.log" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.852038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"336d549b-b94b-4966-af57-2289b1c8acc8","Type":"ContainerDied","Data":"d1120e2bdc884e7d95099ddcf3bbe34694190a558f9990df9daf43b8fd5a6bde"} Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.852136 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.874951 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-85d74757d5-v95tz"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.875228 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-85d74757d5-v95tz" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-httpd" containerID="cri-o://bd81c5b70960bb5c69c83b108d56e6ca81fcf9ac0a02765d463866b0cd5ed1af" gracePeriod=30 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.875667 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-85d74757d5-v95tz" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-server" containerID="cri-o://ffc9dd1e713324f809d315d085d14b604a856f57e2677cc7a6979ac4e967d33f" gracePeriod=30 Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.893735 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-t557n"] Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.906770 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 13 06:49:57 crc kubenswrapper[4833]: E1013 06:49:57.906831 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data podName:0a6ab499-ed60-45e7-b510-5a43422aa7f5 nodeName:}" failed. No retries permitted until 2025-10-13 06:50:01.906816796 +0000 UTC m=+1292.007239712 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data") pod "rabbitmq-cell1-server-0" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5") : configmap "rabbitmq-cell1-config-data" not found Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.934806 4833 scope.go:117] "RemoveContainer" containerID="c8ad3d74107bc327da884a44b88aa948e92843c3f297250dc65f8ce46d13f20f" Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.957913 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.982209 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 13 06:49:57 crc kubenswrapper[4833]: I1013 06:49:57.982519 4833 scope.go:117] "RemoveContainer" containerID="006c322d6580fcba72f2451b54eabfb708adf2bd8b5526724079641631c1a6be" Oct 13 06:49:58 crc kubenswrapper[4833]: E1013 06:49:58.006197 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:49:58 crc kubenswrapper[4833]: E1013 06:49:58.025702 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:49:58 crc kubenswrapper[4833]: E1013 06:49:58.028376 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:49:58 crc kubenswrapper[4833]: E1013 06:49:58.028436 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" containerName="nova-cell1-conductor-conductor" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.137493 4833 scope.go:117] "RemoveContainer" containerID="4862d927879e0dfe854052f89e5cade3b64a709d334d9d354f523dde629a88ac" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.471831 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-85d74757d5-v95tz" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.168:8080/healthcheck\": read tcp 10.217.0.2:59696->10.217.0.168:8080: read: connection reset by peer" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.471882 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-85d74757d5-v95tz" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.168:8080/healthcheck\": read tcp 10.217.0.2:59682->10.217.0.168:8080: read: connection reset by peer" Oct 13 06:49:58 crc kubenswrapper[4833]: W1013 06:49:58.591756 4833 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb825980_5dc2_420a_8638_9607a9f1eb1f.slice/crio-conmon-d8b67d2984098598469efeab7a295956ffe682e1e8a8ed1a870f368f9c9f736f.scope/cpu.weight": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb825980_5dc2_420a_8638_9607a9f1eb1f.slice/crio-conmon-d8b67d2984098598469efeab7a295956ffe682e1e8a8ed1a870f368f9c9f736f.scope/cpu.weight: no such device Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.673708 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" path="/var/lib/kubelet/pods/336d549b-b94b-4966-af57-2289b1c8acc8/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.675878 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a9b023-885e-4bfe-8fa8-21cb68518b48" path="/var/lib/kubelet/pods/56a9b023-885e-4bfe-8fa8-21cb68518b48/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.676392 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5783401d-3007-4df3-a902-1869d62c4acc" path="/var/lib/kubelet/pods/5783401d-3007-4df3-a902-1869d62c4acc/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.677365 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" path="/var/lib/kubelet/pods/61fe1ee9-51ff-4f77-8dd7-4e29e3365556/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.678201 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" path="/var/lib/kubelet/pods/b1ab7add-ea30-4610-a96a-2cad6ae8e40c/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.679026 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bc812647-c154-4fe8-b6cc-fcf008841900" path="/var/lib/kubelet/pods/bc812647-c154-4fe8-b6cc-fcf008841900/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.680045 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b98eb9-459c-4a87-88e3-63624b7969b9" path="/var/lib/kubelet/pods/c7b98eb9-459c-4a87-88e3-63624b7969b9/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.680713 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c4d37b-34eb-411f-9a0f-e266fdf37141" path="/var/lib/kubelet/pods/d3c4d37b-34eb-411f-9a0f-e266fdf37141/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.681234 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa712112-5d57-44d0-9417-a5eb9d993780" path="/var/lib/kubelet/pods/fa712112-5d57-44d0-9417-a5eb9d993780/volumes" Oct 13 06:49:58 crc kubenswrapper[4833]: E1013 06:49:58.688312 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77004520_24e0_4076_8155_b4a8b6b3e1a2.slice/crio-conmon-bd81c5b70960bb5c69c83b108d56e6ca81fcf9ac0a02765d463866b0cd5ed1af.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77004520_24e0_4076_8155_b4a8b6b3e1a2.slice/crio-bd81c5b70960bb5c69c83b108d56e6ca81fcf9ac0a02765d463866b0cd5ed1af.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb825980_5dc2_420a_8638_9607a9f1eb1f.slice/crio-conmon-d8b67d2984098598469efeab7a295956ffe682e1e8a8ed1a870f368f9c9f736f.scope\": RecentStats: unable to find data in memory cache]" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.705888 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.732383 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.732917 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.846235 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kolla-config\") pod \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.846280 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-generated\") pod \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.846333 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-nova-novncproxy-tls-certs\") pod \"7113b07b-875e-4a09-a221-be312e4d0dce\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.846356 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t2wr\" (UniqueName: \"kubernetes.io/projected/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kube-api-access-6t2wr\") pod \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.846377 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-config-data\") pod \"7113b07b-875e-4a09-a221-be312e4d0dce\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.846402 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-galera-tls-certs\") pod \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847448 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-vencrypt-tls-certs\") pod \"7113b07b-875e-4a09-a221-be312e4d0dce\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847494 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-combined-ca-bundle\") pod \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847556 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbx2t\" (UniqueName: \"kubernetes.io/projected/7113b07b-875e-4a09-a221-be312e4d0dce-kube-api-access-wbx2t\") pod \"7113b07b-875e-4a09-a221-be312e4d0dce\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847592 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847609 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9gfz\" (UniqueName: \"kubernetes.io/projected/475289a4-cf33-4f56-93d9-73f7551026f8-kube-api-access-z9gfz\") pod \"475289a4-cf33-4f56-93d9-73f7551026f8\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847657 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-default\") pod \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847679 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-operator-scripts\") pod \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847705 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-config-data\") pod \"475289a4-cf33-4f56-93d9-73f7551026f8\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847749 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-combined-ca-bundle\") pod \"7113b07b-875e-4a09-a221-be312e4d0dce\" (UID: \"7113b07b-875e-4a09-a221-be312e4d0dce\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847850 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-combined-ca-bundle\") pod \"475289a4-cf33-4f56-93d9-73f7551026f8\" (UID: \"475289a4-cf33-4f56-93d9-73f7551026f8\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.847887 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-secrets\") pod \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\" (UID: \"fa2db326-7b3a-4cc8-acb4-9c680c8f4972\") " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.849416 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.849886 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.853401 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475289a4-cf33-4f56-93d9-73f7551026f8-kube-api-access-z9gfz" (OuterVolumeSpecName: "kube-api-access-z9gfz") pod "475289a4-cf33-4f56-93d9-73f7551026f8" (UID: "475289a4-cf33-4f56-93d9-73f7551026f8"). InnerVolumeSpecName "kube-api-access-z9gfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.854387 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kube-api-access-6t2wr" (OuterVolumeSpecName: "kube-api-access-6t2wr") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "kube-api-access-6t2wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.862096 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.862678 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.865207 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-secrets" (OuterVolumeSpecName: "secrets") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.873132 4833 generic.go:334] "Generic (PLEG): container finished" podID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerID="ffc9dd1e713324f809d315d085d14b604a856f57e2677cc7a6979ac4e967d33f" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.873184 4833 generic.go:334] "Generic (PLEG): container finished" podID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerID="bd81c5b70960bb5c69c83b108d56e6ca81fcf9ac0a02765d463866b0cd5ed1af" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.873220 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85d74757d5-v95tz" event={"ID":"77004520-24e0-4076-8155-b4a8b6b3e1a2","Type":"ContainerDied","Data":"ffc9dd1e713324f809d315d085d14b604a856f57e2677cc7a6979ac4e967d33f"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.873243 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85d74757d5-v95tz" event={"ID":"77004520-24e0-4076-8155-b4a8b6b3e1a2","Type":"ContainerDied","Data":"bd81c5b70960bb5c69c83b108d56e6ca81fcf9ac0a02765d463866b0cd5ed1af"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.874412 4833 generic.go:334] "Generic (PLEG): container finished" podID="cb825980-5dc2-420a-8638-9607a9f1eb1f" containerID="d8b67d2984098598469efeab7a295956ffe682e1e8a8ed1a870f368f9c9f736f" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.874447 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell07d06-account-delete-22d5b" event={"ID":"cb825980-5dc2-420a-8638-9607a9f1eb1f","Type":"ContainerDied","Data":"d8b67d2984098598469efeab7a295956ffe682e1e8a8ed1a870f368f9c9f736f"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.876288 4833 generic.go:334] "Generic (PLEG): container finished" podID="3f980bce-4b41-461d-9a1f-af4e6fb7455b" containerID="92baacf08545a4d0d20572f7ee58b21cd9c99873a45dab1e9d54ae3ddc847186" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.876329 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement3a43-account-delete-t5vdd" event={"ID":"3f980bce-4b41-461d-9a1f-af4e6fb7455b","Type":"ContainerDied","Data":"92baacf08545a4d0d20572f7ee58b21cd9c99873a45dab1e9d54ae3ddc847186"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.883827 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7113b07b-875e-4a09-a221-be312e4d0dce-kube-api-access-wbx2t" (OuterVolumeSpecName: "kube-api-access-wbx2t") pod "7113b07b-875e-4a09-a221-be312e4d0dce" (UID: "7113b07b-875e-4a09-a221-be312e4d0dce"). InnerVolumeSpecName "kube-api-access-wbx2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.890418 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" containerID="fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.890608 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.891341 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fa2db326-7b3a-4cc8-acb4-9c680c8f4972","Type":"ContainerDied","Data":"fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.891367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fa2db326-7b3a-4cc8-acb4-9c680c8f4972","Type":"ContainerDied","Data":"95e2db78158d9efaa6ce3d6cf05b6c8d57049caddb3ab58ab62b842d68ff3d79"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.891385 4833 scope.go:117] "RemoveContainer" containerID="fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.895785 4833 generic.go:334] "Generic (PLEG): container finished" podID="f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a" containerID="1bde0c46530488bbd8249b35833c0404f05caf636dc071a435f090a5decab08b" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.895862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi1bdf-account-delete-k9wpj" event={"ID":"f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a","Type":"ContainerDied","Data":"1bde0c46530488bbd8249b35833c0404f05caf636dc071a435f090a5decab08b"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.905680 4833 generic.go:334] "Generic (PLEG): container finished" podID="7113b07b-875e-4a09-a221-be312e4d0dce" containerID="dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.906132 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.907017 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7113b07b-875e-4a09-a221-be312e4d0dce","Type":"ContainerDied","Data":"dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.907055 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7113b07b-875e-4a09-a221-be312e4d0dce","Type":"ContainerDied","Data":"3cf0379046d4ed15a16b89883583db031e5ad6809c3a8317821f6d6c5711d528"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.917698 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.935278 4833 scope.go:117] "RemoveContainer" containerID="e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.938145 4833 generic.go:334] "Generic (PLEG): container finished" podID="d0bc4033-85b9-4212-b1e2-3c5888ddcf0a" containerID="95826628cdc2a76d70549b876e6f4af509be1931f75e497eca5b943ede2e51ce" exitCode=1 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.938260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell13e16-account-delete-nqr9l" event={"ID":"d0bc4033-85b9-4212-b1e2-3c5888ddcf0a","Type":"ContainerDied","Data":"95826628cdc2a76d70549b876e6f4af509be1931f75e497eca5b943ede2e51ce"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.940799 4833 generic.go:334] "Generic (PLEG): container finished" podID="e71af496-4851-4904-9003-0358adc97b94" containerID="149c8e3ce91dc7834b024b0cc72a73ba4a7ebf9adfb1a90e69396b5c7bb5813c" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.940907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderb181-account-delete-hqpht" event={"ID":"e71af496-4851-4904-9003-0358adc97b94","Type":"ContainerDied","Data":"149c8e3ce91dc7834b024b0cc72a73ba4a7ebf9adfb1a90e69396b5c7bb5813c"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953314 4833 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-secrets\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953340 4833 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953350 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-generated\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953359 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t2wr\" (UniqueName: \"kubernetes.io/projected/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-kube-api-access-6t2wr\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953368 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbx2t\" (UniqueName: \"kubernetes.io/projected/7113b07b-875e-4a09-a221-be312e4d0dce-kube-api-access-wbx2t\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953386 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953396 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9gfz\" (UniqueName: \"kubernetes.io/projected/475289a4-cf33-4f56-93d9-73f7551026f8-kube-api-access-z9gfz\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953405 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-config-data-default\") on node \"crc\" DevicePath 
\"\"" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.953413 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-operator-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.955048 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-config-data" (OuterVolumeSpecName: "config-data") pod "475289a4-cf33-4f56-93d9-73f7551026f8" (UID: "475289a4-cf33-4f56-93d9-73f7551026f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.960482 4833 generic.go:334] "Generic (PLEG): container finished" podID="475289a4-cf33-4f56-93d9-73f7551026f8" containerID="3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1" exitCode=0 Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.960523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475289a4-cf33-4f56-93d9-73f7551026f8","Type":"ContainerDied","Data":"3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.960563 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475289a4-cf33-4f56-93d9-73f7551026f8","Type":"ContainerDied","Data":"0abc208f9dd84299dd86758a437d24681925a350649698b8d7d6c9dd896f1e42"} Oct 13 06:49:58 crc kubenswrapper[4833]: I1013 06:49:58.960619 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.000898 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "475289a4-cf33-4f56-93d9-73f7551026f8" (UID: "475289a4-cf33-4f56-93d9-73f7551026f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.016704 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7113b07b-875e-4a09-a221-be312e4d0dce" (UID: "7113b07b-875e-4a09-a221-be312e4d0dce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.027745 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.029184 4833 scope.go:117] "RemoveContainer" containerID="fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c" Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.030510 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c\": container with ID starting with fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c not found: ID does not exist" containerID="fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.030537 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c"} err="failed to get container status \"fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c\": rpc error: code = NotFound desc = could not find container \"fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c\": container with ID starting with fd89fcb801c3e73ae689bfd58be2d8c38227f0a3b7769b280f02d0cbef0d1f9c not found: ID does not exist" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.030605 4833 scope.go:117] "RemoveContainer" containerID="e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132" Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.031061 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132\": container with ID starting with e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132 not found: ID does not exist" containerID="e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.031112 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132"} err="failed to get container status \"e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132\": rpc error: code = NotFound desc = could not find container \"e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132\": container with ID starting with e0e632db85cd6307c8c0ca386cb79abd88b3e5b632d72bd8657bfe05be5f7132 not found: ID does not exist" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.031137 4833 scope.go:117] "RemoveContainer" containerID="dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.044031 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.055250 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.055474 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.055484 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475289a4-cf33-4f56-93d9-73f7551026f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.055493 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.055501 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.074432 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "7113b07b-875e-4a09-a221-be312e4d0dce" (UID: "7113b07b-875e-4a09-a221-be312e4d0dce"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.092455 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.175:9292/healthcheck\": read tcp 10.217.0.2:38326->10.217.0.175:9292: read: connection reset by peer" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.092750 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.175:9292/healthcheck\": read tcp 10.217.0.2:38338->10.217.0.175:9292: read: connection reset by peer" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.093590 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-config-data" (OuterVolumeSpecName: "config-data") pod "7113b07b-875e-4a09-a221-be312e4d0dce" (UID: "7113b07b-875e-4a09-a221-be312e4d0dce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.094382 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "7113b07b-875e-4a09-a221-be312e4d0dce" (UID: "7113b07b-875e-4a09-a221-be312e4d0dce"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.120189 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "fa2db326-7b3a-4cc8-acb4-9c680c8f4972" (UID: "fa2db326-7b3a-4cc8-acb4-9c680c8f4972"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.158017 4833 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.158054 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.158065 4833 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2db326-7b3a-4cc8-acb4-9c680c8f4972-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.158074 4833 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7113b07b-875e-4a09-a221-be312e4d0dce-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.349204 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.376025 4833 scope.go:117] "RemoveContainer" containerID="dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71" Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.376937 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71\": container with ID starting with dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71 not found: ID does not exist" containerID="dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.376966 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71"} err="failed to get container status \"dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71\": rpc error: code = NotFound desc = could not find container \"dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71\": container with ID starting with dd3cdf5ce0a05f6193bc89231be6e02af61a3556f7cc4765d4ed080a58b95e71 not found: ID does not exist" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.376988 4833 scope.go:117] "RemoveContainer" containerID="3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.383977 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.397650 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 
06:49:59.401252 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.177:9292/healthcheck\": read tcp 10.217.0.2:38446->10.217.0.177:9292: read: connection reset by peer" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.402572 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.177:9292/healthcheck\": read tcp 10.217.0.2:38458->10.217.0.177:9292: read: connection reset by peer" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.411608 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.418880 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.433274 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.441013 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.473594 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-log-httpd\") pod \"77004520-24e0-4076-8155-b4a8b6b3e1a2\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.474039 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-etc-swift\") pod \"77004520-24e0-4076-8155-b4a8b6b3e1a2\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.474189 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-run-httpd\") pod \"77004520-24e0-4076-8155-b4a8b6b3e1a2\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.474239 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-combined-ca-bundle\") pod \"77004520-24e0-4076-8155-b4a8b6b3e1a2\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.474273 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-internal-tls-certs\") pod \"77004520-24e0-4076-8155-b4a8b6b3e1a2\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.474307 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbb2r\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-kube-api-access-cbb2r\") pod \"77004520-24e0-4076-8155-b4a8b6b3e1a2\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.474347 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-config-data\") pod \"77004520-24e0-4076-8155-b4a8b6b3e1a2\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.474415 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-public-tls-certs\") pod \"77004520-24e0-4076-8155-b4a8b6b3e1a2\" (UID: \"77004520-24e0-4076-8155-b4a8b6b3e1a2\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.474649 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "77004520-24e0-4076-8155-b4a8b6b3e1a2" (UID: "77004520-24e0-4076-8155-b4a8b6b3e1a2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.475993 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.476093 4833 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: secret "cinder-api-config-data" not found Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.476139 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:50:03.476124164 +0000 UTC m=+1293.576547080 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-api-config-data" not found Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.480721 4833 scope.go:117] "RemoveContainer" containerID="3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.481068 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "77004520-24e0-4076-8155-b4a8b6b3e1a2" (UID: "77004520-24e0-4076-8155-b4a8b6b3e1a2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.485635 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1\": container with ID starting with 3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1 not found: ID does not exist" containerID="3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.485675 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1"} err="failed to get container status \"3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1\": rpc error: code = NotFound desc = could not find container \"3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1\": container with ID starting with 3e0342cb2a85fe207f3129a530b551fb8f028c6b0f6607f69d53f7edaab9d8e1 not found: ID does not exist" Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.485773 4833 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.485827 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:50:03.485804415 +0000 UTC m=+1293.586227331 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-config-data" not found Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.488034 4833 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Oct 13 06:49:59 crc kubenswrapper[4833]: E1013 06:49:59.488097 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts podName:aaeaef09-d532-4399-b9bb-c9e59fbf1a62 nodeName:}" failed. No retries permitted until 2025-10-13 06:50:03.488079829 +0000 UTC m=+1293.588502745 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts") pod "cinder-api-0" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62") : secret "cinder-scripts" not found Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.490001 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "77004520-24e0-4076-8155-b4a8b6b3e1a2" (UID: "77004520-24e0-4076-8155-b4a8b6b3e1a2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.505995 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-kube-api-access-cbb2r" (OuterVolumeSpecName: "kube-api-access-cbb2r") pod "77004520-24e0-4076-8155-b4a8b6b3e1a2" (UID: "77004520-24e0-4076-8155-b4a8b6b3e1a2"). InnerVolumeSpecName "kube-api-access-cbb2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.549357 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell13e16-account-delete-nqr9l" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.584786 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77004520-24e0-4076-8155-b4a8b6b3e1a2-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.584831 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbb2r\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-kube-api-access-cbb2r\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.584845 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77004520-24e0-4076-8155-b4a8b6b3e1a2-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.630338 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinderb181-account-delete-hqpht" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.631191 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement3a43-account-delete-t5vdd" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.654792 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "77004520-24e0-4076-8155-b4a8b6b3e1a2" (UID: "77004520-24e0-4076-8155-b4a8b6b3e1a2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.690655 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb9ts\" (UniqueName: \"kubernetes.io/projected/d0bc4033-85b9-4212-b1e2-3c5888ddcf0a-kube-api-access-lb9ts\") pod \"d0bc4033-85b9-4212-b1e2-3c5888ddcf0a\" (UID: \"d0bc4033-85b9-4212-b1e2-3c5888ddcf0a\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.691518 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.737195 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0bc4033-85b9-4212-b1e2-3c5888ddcf0a-kube-api-access-lb9ts" (OuterVolumeSpecName: "kube-api-access-lb9ts") pod "d0bc4033-85b9-4212-b1e2-3c5888ddcf0a" (UID: "d0bc4033-85b9-4212-b1e2-3c5888ddcf0a"). InnerVolumeSpecName "kube-api-access-lb9ts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.792472 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh728\" (UniqueName: \"kubernetes.io/projected/e71af496-4851-4904-9003-0358adc97b94-kube-api-access-lh728\") pod \"e71af496-4851-4904-9003-0358adc97b94\" (UID: \"e71af496-4851-4904-9003-0358adc97b94\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.792606 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d89f8\" (UniqueName: \"kubernetes.io/projected/3f980bce-4b41-461d-9a1f-af4e6fb7455b-kube-api-access-d89f8\") pod \"3f980bce-4b41-461d-9a1f-af4e6fb7455b\" (UID: \"3f980bce-4b41-461d-9a1f-af4e6fb7455b\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.793049 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb9ts\" (UniqueName: \"kubernetes.io/projected/d0bc4033-85b9-4212-b1e2-3c5888ddcf0a-kube-api-access-lb9ts\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.796811 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e71af496-4851-4904-9003-0358adc97b94-kube-api-access-lh728" (OuterVolumeSpecName: "kube-api-access-lh728") pod "e71af496-4851-4904-9003-0358adc97b94" (UID: "e71af496-4851-4904-9003-0358adc97b94"). InnerVolumeSpecName "kube-api-access-lh728". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.798390 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": read tcp 10.217.0.2:50006->10.217.0.205:8775: read: connection reset by peer" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.799753 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": read tcp 10.217.0.2:50016->10.217.0.205:8775: read: connection reset by peer" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.822356 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f980bce-4b41-461d-9a1f-af4e6fb7455b-kube-api-access-d89f8" (OuterVolumeSpecName: "kube-api-access-d89f8") pod "3f980bce-4b41-461d-9a1f-af4e6fb7455b" (UID: "3f980bce-4b41-461d-9a1f-af4e6fb7455b"). InnerVolumeSpecName "kube-api-access-d89f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.823240 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "77004520-24e0-4076-8155-b4a8b6b3e1a2" (UID: "77004520-24e0-4076-8155-b4a8b6b3e1a2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.835887 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-g7h2h"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.840187 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-g7h2h"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.853041 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.858676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77004520-24e0-4076-8155-b4a8b6b3e1a2" (UID: "77004520-24e0-4076-8155-b4a8b6b3e1a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.859261 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1bdf-account-create-rmzqc"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.867839 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi1bdf-account-delete-k9wpj"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.894464 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh728\" (UniqueName: \"kubernetes.io/projected/e71af496-4851-4904-9003-0358adc97b94-kube-api-access-lh728\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.894493 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.894502 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.894510 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d89f8\" (UniqueName: \"kubernetes.io/projected/3f980bce-4b41-461d-9a1f-af4e6fb7455b-kube-api-access-d89f8\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.894539 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1bdf-account-create-rmzqc"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.924449 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6869bc4646-lrqdg" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.924911 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.925175 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="ceilometer-central-agent" containerID="cri-o://b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd" gracePeriod=30 Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.925354 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="proxy-httpd" containerID="cri-o://2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461" gracePeriod=30 Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.925409 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="sg-core" containerID="cri-o://56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5" gracePeriod=30 Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.925448 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="ceilometer-notification-agent" containerID="cri-o://d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df" gracePeriod=30 Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.950899 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.951207 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="11f688b0-b3aa-46f7-a700-c6619e3a3951" containerName="kube-state-metrics" containerID="cri-o://96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5" gracePeriod=30 Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.962720 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-config-data" (OuterVolumeSpecName: "config-data") pod "77004520-24e0-4076-8155-b4a8b6b3e1a2" (UID: "77004520-24e0-4076-8155-b4a8b6b3e1a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.978660 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.978994 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" containerName="memcached" containerID="cri-o://5e2a48246659117dd99f108e17e66bdef082cff5d89afeb57ea91830bf119391" gracePeriod=30 Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995530 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-scripts\") pod \"baa4dafc-7be7-4f97-ba72-359c27e3151c\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995640 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-logs\") pod \"baa4dafc-7be7-4f97-ba72-359c27e3151c\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995658 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnsqr\" (UniqueName: \"kubernetes.io/projected/626d71e0-e957-4a46-9565-d19058a575c9-kube-api-access-mnsqr\") pod \"626d71e0-e957-4a46-9565-d19058a575c9\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-public-tls-certs\") pod \"626d71e0-e957-4a46-9565-d19058a575c9\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995733 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-public-tls-certs\") pod \"baa4dafc-7be7-4f97-ba72-359c27e3151c\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995785 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d71e0-e957-4a46-9565-d19058a575c9-logs\") pod \"626d71e0-e957-4a46-9565-d19058a575c9\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995812 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-config-data\") pod \"626d71e0-e957-4a46-9565-d19058a575c9\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995853 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-combined-ca-bundle\") pod \"baa4dafc-7be7-4f97-ba72-359c27e3151c\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995885 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-httpd-run\") pod \"baa4dafc-7be7-4f97-ba72-359c27e3151c\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995901 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-scripts\") pod \"626d71e0-e957-4a46-9565-d19058a575c9\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995919 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-config-data\") pod \"baa4dafc-7be7-4f97-ba72-359c27e3151c\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995937 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zdkc\" (UniqueName: \"kubernetes.io/projected/baa4dafc-7be7-4f97-ba72-359c27e3151c-kube-api-access-4zdkc\") pod \"baa4dafc-7be7-4f97-ba72-359c27e3151c\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995967 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"baa4dafc-7be7-4f97-ba72-359c27e3151c\" (UID: \"baa4dafc-7be7-4f97-ba72-359c27e3151c\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.995983 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-combined-ca-bundle\") pod \"626d71e0-e957-4a46-9565-d19058a575c9\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.996068 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-internal-tls-certs\") pod \"626d71e0-e957-4a46-9565-d19058a575c9\" (UID: \"626d71e0-e957-4a46-9565-d19058a575c9\") " Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.996112 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-logs" (OuterVolumeSpecName: "logs") pod "baa4dafc-7be7-4f97-ba72-359c27e3151c" (UID: "baa4dafc-7be7-4f97-ba72-359c27e3151c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.996418 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:49:59 crc kubenswrapper[4833]: I1013 06:49:59.996431 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77004520-24e0-4076-8155-b4a8b6b3e1a2-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.007008 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "baa4dafc-7be7-4f97-ba72-359c27e3151c" (UID: "baa4dafc-7be7-4f97-ba72-359c27e3151c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.007290 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/626d71e0-e957-4a46-9565-d19058a575c9-logs" (OuterVolumeSpecName: "logs") pod "626d71e0-e957-4a46-9565-d19058a575c9" (UID: "626d71e0-e957-4a46-9565-d19058a575c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.008807 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-scripts" (OuterVolumeSpecName: "scripts") pod "baa4dafc-7be7-4f97-ba72-359c27e3151c" (UID: "baa4dafc-7be7-4f97-ba72-359c27e3151c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.045340 4833 generic.go:334] "Generic (PLEG): container finished" podID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerID="51e7bc679df23d3e526317bc29126a1542ce20237f8a48a696b934699094819a" exitCode=0 Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.045447 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d04cb142-7473-455b-8d5b-f79d879d8d58","Type":"ContainerDied","Data":"51e7bc679df23d3e526317bc29126a1542ce20237f8a48a696b934699094819a"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.055643 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pg5rw"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.068402 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa4dafc-7be7-4f97-ba72-359c27e3151c-kube-api-access-4zdkc" (OuterVolumeSpecName: "kube-api-access-4zdkc") pod "baa4dafc-7be7-4f97-ba72-359c27e3151c" (UID: "baa4dafc-7be7-4f97-ba72-359c27e3151c"). InnerVolumeSpecName "kube-api-access-4zdkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.070629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85d74757d5-v95tz" event={"ID":"77004520-24e0-4076-8155-b4a8b6b3e1a2","Type":"ContainerDied","Data":"fc4185496a8f5aceebc4214786114cd6386579632e77074f673c4c0161c217d8"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.070671 4833 scope.go:117] "RemoveContainer" containerID="ffc9dd1e713324f809d315d085d14b604a856f57e2677cc7a6979ac4e967d33f" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.070796 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-85d74757d5-v95tz" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.100651 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d71e0-e957-4a46-9565-d19058a575c9-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.100676 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/baa4dafc-7be7-4f97-ba72-359c27e3151c-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.100685 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zdkc\" (UniqueName: \"kubernetes.io/projected/baa4dafc-7be7-4f97-ba72-359c27e3151c-kube-api-access-4zdkc\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.100693 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.115041 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pg5rw"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.129933 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "baa4dafc-7be7-4f97-ba72-359c27e3151c" (UID: "baa4dafc-7be7-4f97-ba72-359c27e3151c"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.130278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626d71e0-e957-4a46-9565-d19058a575c9-kube-api-access-mnsqr" (OuterVolumeSpecName: "kube-api-access-mnsqr") pod "626d71e0-e957-4a46-9565-d19058a575c9" (UID: "626d71e0-e957-4a46-9565-d19058a575c9"). InnerVolumeSpecName "kube-api-access-mnsqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.135832 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-scripts" (OuterVolumeSpecName: "scripts") pod "626d71e0-e957-4a46-9565-d19058a575c9" (UID: "626d71e0-e957-4a46-9565-d19058a575c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.153230 4833 generic.go:334] "Generic (PLEG): container finished" podID="626d71e0-e957-4a46-9565-d19058a575c9" containerID="65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a" exitCode=0 Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.153329 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869bc4646-lrqdg" event={"ID":"626d71e0-e957-4a46-9565-d19058a575c9","Type":"ContainerDied","Data":"65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.153358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869bc4646-lrqdg" event={"ID":"626d71e0-e957-4a46-9565-d19058a575c9","Type":"ContainerDied","Data":"3c00a96fa2cd6e1491463d8d7820e7911b1a7848da1885898a55495e12e3893f"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.153346 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6869bc4646-lrqdg" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.161235 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.169:8776/healthcheck\": read tcp 10.217.0.2:58058->10.217.0.169:8776: read: connection reset by peer" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.164743 4833 generic.go:334] "Generic (PLEG): container finished" podID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerID="0c49851d3254ed77c14a56073d79efd51082af7a60fed7458676ff9c919c96c6" exitCode=0 Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.164820 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aaf5d8e-00de-473b-91d2-1dd8a7354853","Type":"ContainerDied","Data":"0c49851d3254ed77c14a56073d79efd51082af7a60fed7458676ff9c919c96c6"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.166422 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l8snz"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.173637 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell13e16-account-delete-nqr9l" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.174087 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l8snz"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.174949 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell13e16-account-delete-nqr9l" event={"ID":"d0bc4033-85b9-4212-b1e2-3c5888ddcf0a","Type":"ContainerDied","Data":"084f5b319b94339308257cc211ed98fbc933a70e5795aa552b94ef3e7e9b65f2"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.178607 4833 generic.go:334] "Generic (PLEG): container finished" podID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerID="ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a" exitCode=0 Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.178660 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"baa4dafc-7be7-4f97-ba72-359c27e3151c","Type":"ContainerDied","Data":"ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.178679 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"baa4dafc-7be7-4f97-ba72-359c27e3151c","Type":"ContainerDied","Data":"1ca508c1342303ad5c31c8a5e603015c22d16995025ac25d7f8d695d9a220d32"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.178745 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.180783 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinderb181-account-delete-hqpht" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.180815 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderb181-account-delete-hqpht" event={"ID":"e71af496-4851-4904-9003-0358adc97b94","Type":"ContainerDied","Data":"c04550cbfa74b033fe8bd4f01744b1ac2db6a04a8e44ce1d4bc1152097543cfa"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.184716 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement3a43-account-delete-t5vdd" event={"ID":"3f980bce-4b41-461d-9a1f-af4e6fb7455b","Type":"ContainerDied","Data":"e3acced9f54b0c35f66144dd5bb8908e36538d2ecb0db8b933eaa70b09ea5a05"} Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.184791 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement3a43-account-delete-t5vdd" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.189477 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-558c47b6d4-9zp2v"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.189706 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-558c47b6d4-9zp2v" podUID="c03db41f-e7fb-4188-bd67-13f35c231490" containerName="keystone-api" containerID="cri-o://9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e" gracePeriod=30 Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.202346 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.202385 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.202395 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnsqr\" (UniqueName: \"kubernetes.io/projected/626d71e0-e957-4a46-9565-d19058a575c9-kube-api-access-mnsqr\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.206431 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.207000 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mhvzb"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.213918 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mhvzb"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.230160 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-97cb-account-create-rn9b5"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.230208 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-97cb-account-create-rn9b5"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.234790 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9f6qc"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.245107 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9f6qc"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.252646 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7d06-account-create-dk8dj"] Oct 13 06:50:00 crc 
kubenswrapper[4833]: I1013 06:50:00.258132 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7d06-account-create-dk8dj"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.263669 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell07d06-account-delete-22d5b"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.275758 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595797578d-ddhnv" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:46258->10.217.0.159:9311: read: connection reset by peer" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.276075 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595797578d-ddhnv" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:46274->10.217.0.159:9311: read: connection reset by peer" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.349505 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.372007 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baa4dafc-7be7-4f97-ba72-359c27e3151c" (UID: "baa4dafc-7be7-4f97-ba72-359c27e3151c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.373656 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "626d71e0-e957-4a46-9565-d19058a575c9" (UID: "626d71e0-e957-4a46-9565-d19058a575c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.410054 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.410414 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.410431 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.416152 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "baa4dafc-7be7-4f97-ba72-359c27e3151c" (UID: "baa4dafc-7be7-4f97-ba72-359c27e3151c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.417798 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-config-data" (OuterVolumeSpecName: "config-data") pod "626d71e0-e957-4a46-9565-d19058a575c9" (UID: "626d71e0-e957-4a46-9565-d19058a575c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.451794 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-config-data" (OuterVolumeSpecName: "config-data") pod "baa4dafc-7be7-4f97-ba72-359c27e3151c" (UID: "baa4dafc-7be7-4f97-ba72-359c27e3151c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.474069 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "626d71e0-e957-4a46-9565-d19058a575c9" (UID: "626d71e0-e957-4a46-9565-d19058a575c9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.483623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "626d71e0-e957-4a46-9565-d19058a575c9" (UID: "626d71e0-e957-4a46-9565-d19058a575c9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.512282 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.512311 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.512321 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.512332 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/626d71e0-e957-4a46-9565-d19058a575c9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.512349 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baa4dafc-7be7-4f97-ba72-359c27e3151c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.535094 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="aa0ca608-57b5-4289-9271-fcc10a6c7422" containerName="galera" containerID="cri-o://be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03" gracePeriod=30 Oct 13 06:50:00 crc kubenswrapper[4833]: E1013 06:50:00.542583 4833 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8 is running failed: container process not found" containerID="d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.542666 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.542717 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:50:00 crc kubenswrapper[4833]: E1013 06:50:00.542810 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8 is running failed: container process not found" containerID="d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:50:00 crc kubenswrapper[4833]: E1013 06:50:00.542967 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8 is running failed: container process not found" containerID="d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:50:00 crc kubenswrapper[4833]: E1013 06:50:00.542990 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="fd0bf370-6aac-4334-b612-db75770844df" containerName="nova-cell0-conductor-conductor" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.593340 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.204:3000/\": dial tcp 10.217.0.204:3000: connect: connection refused" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.648060 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03bb2849-0073-48ac-b568-609f917fe111" path="/var/lib/kubelet/pods/03bb2849-0073-48ac-b568-609f917fe111/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.648938 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c65395b-b132-4f36-98d1-f8eb739bab83" path="/var/lib/kubelet/pods/0c65395b-b132-4f36-98d1-f8eb739bab83/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.649408 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152bb234-9831-4182-b04e-61e6693051f8" 
path="/var/lib/kubelet/pods/152bb234-9831-4182-b04e-61e6693051f8/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.649968 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232038a6-156e-437b-9975-ac0fb0385c76" path="/var/lib/kubelet/pods/232038a6-156e-437b-9975-ac0fb0385c76/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.651377 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475289a4-cf33-4f56-93d9-73f7551026f8" path="/var/lib/kubelet/pods/475289a4-cf33-4f56-93d9-73f7551026f8/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.652424 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604fdcdc-4fc5-4dcb-98b2-42e44f2bad23" path="/var/lib/kubelet/pods/604fdcdc-4fc5-4dcb-98b2-42e44f2bad23/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.653398 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6924055c-4a48-4fdc-ba3f-fb5c48bd110e" path="/var/lib/kubelet/pods/6924055c-4a48-4fdc-ba3f-fb5c48bd110e/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.654476 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7113b07b-875e-4a09-a221-be312e4d0dce" path="/var/lib/kubelet/pods/7113b07b-875e-4a09-a221-be312e4d0dce/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.655414 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d46a93cd-a9e7-492d-99f2-931ea5e957c2" path="/var/lib/kubelet/pods/d46a93cd-a9e7-492d-99f2-931ea5e957c2/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.656024 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60dfd99-68a4-49c4-8265-06cb09bca910" path="/var/lib/kubelet/pods/f60dfd99-68a4-49c4-8265-06cb09bca910/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.656783 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" path="/var/lib/kubelet/pods/fa2db326-7b3a-4cc8-acb4-9c680c8f4972/volumes" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.872317 4833 scope.go:117] "RemoveContainer" containerID="bd81c5b70960bb5c69c83b108d56e6ca81fcf9ac0a02765d463866b0cd5ed1af" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.877261 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.886011 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.894223 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.913846 4833 scope.go:117] "RemoveContainer" containerID="65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.935429 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell07d06-account-delete-22d5b" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.939706 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi1bdf-account-delete-k9wpj" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.955654 4833 scope.go:117] "RemoveContainer" containerID="b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.957613 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.973421 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinderb181-account-delete-hqpht"] Oct 13 06:50:00 crc kubenswrapper[4833]: I1013 06:50:00.977504 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.001224 4833 scope.go:117] "RemoveContainer" containerID="65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.002277 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a\": container with ID starting with 65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a not found: ID does not exist" containerID="65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.002308 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a"} err="failed to get container status \"65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a\": rpc error: code = NotFound desc = could not find container \"65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a\": container with ID starting with 65e6c17688173888d5cd8825b6cc56823eb1c0169fcf3185554758380d59282a not found: ID does not exist" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.002330 4833 scope.go:117] "RemoveContainer" containerID="b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.002633 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f\": container with ID starting with b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f not found: ID does not exist" containerID="b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.002656 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f"} err="failed to get container status \"b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f\": rpc error: code = NotFound desc = could not find container \"b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f\": container with ID starting with b30092ce1d6d2a4148dfa7a2b34e676c77f00b1ab27c3818e714c5780982199f not found: ID does not exist" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.002668 4833 scope.go:117] "RemoveContainer" containerID="95826628cdc2a76d70549b876e6f4af509be1931f75e497eca5b943ede2e51ce" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.011971 4833 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinderb181-account-delete-hqpht"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.022160 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell13e16-account-delete-nqr9l"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.029166 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell13e16-account-delete-nqr9l"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.035668 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-85d74757d5-v95tz"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.036346 4833 scope.go:117] "RemoveContainer" containerID="ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.043445 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-85d74757d5-v95tz"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.048816 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b7nh\" (UniqueName: \"kubernetes.io/projected/6f85d40e-16b8-4ece-a268-8b4d227ac36c-kube-api-access-7b7nh\") pod \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.048868 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-combined-ca-bundle\") pod \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.048900 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-httpd-run\") pod \"d04cb142-7473-455b-8d5b-f79d879d8d58\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.048994 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-config\") pod \"11f688b0-b3aa-46f7-a700-c6619e3a3951\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049016 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-scripts\") pod \"d04cb142-7473-455b-8d5b-f79d879d8d58\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049033 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-config-data\") pod \"d04cb142-7473-455b-8d5b-f79d879d8d58\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049055 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-logs\") pod \"d04cb142-7473-455b-8d5b-f79d879d8d58\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049074 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data-custom\") pod \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049111 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4t9\" (UniqueName: \"kubernetes.io/projected/f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a-kube-api-access-sc4t9\") pod \"f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a\" (UID: \"f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049153 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trn26\" (UniqueName: \"kubernetes.io/projected/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-api-access-trn26\") pod \"11f688b0-b3aa-46f7-a700-c6619e3a3951\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049203 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82hbd\" (UniqueName: \"kubernetes.io/projected/cb825980-5dc2-420a-8638-9607a9f1eb1f-kube-api-access-82hbd\") pod \"cb825980-5dc2-420a-8638-9607a9f1eb1f\" (UID: \"cb825980-5dc2-420a-8638-9607a9f1eb1f\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049265 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-internal-tls-certs\") pod \"d04cb142-7473-455b-8d5b-f79d879d8d58\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049293 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data\") pod \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-combined-ca-bundle\") pod \"11f688b0-b3aa-46f7-a700-c6619e3a3951\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049358 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d04cb142-7473-455b-8d5b-f79d879d8d58\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049379 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-combined-ca-bundle\") pod \"d04cb142-7473-455b-8d5b-f79d879d8d58\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049403 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-certs\") pod \"11f688b0-b3aa-46f7-a700-c6619e3a3951\" (UID: \"11f688b0-b3aa-46f7-a700-c6619e3a3951\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049439 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-p9wmd\" (UniqueName: \"kubernetes.io/projected/d04cb142-7473-455b-8d5b-f79d879d8d58-kube-api-access-p9wmd\") pod \"d04cb142-7473-455b-8d5b-f79d879d8d58\" (UID: \"d04cb142-7473-455b-8d5b-f79d879d8d58\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.049464 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f85d40e-16b8-4ece-a268-8b4d227ac36c-logs\") pod \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\" (UID: \"6f85d40e-16b8-4ece-a268-8b4d227ac36c\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.051946 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement3a43-account-delete-t5vdd"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.054853 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f85d40e-16b8-4ece-a268-8b4d227ac36c-kube-api-access-7b7nh" (OuterVolumeSpecName: "kube-api-access-7b7nh") pod "6f85d40e-16b8-4ece-a268-8b4d227ac36c" (UID: "6f85d40e-16b8-4ece-a268-8b4d227ac36c"). InnerVolumeSpecName "kube-api-access-7b7nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.055619 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-api-access-trn26" (OuterVolumeSpecName: "kube-api-access-trn26") pod "11f688b0-b3aa-46f7-a700-c6619e3a3951" (UID: "11f688b0-b3aa-46f7-a700-c6619e3a3951"). InnerVolumeSpecName "kube-api-access-trn26". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.056039 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-logs" (OuterVolumeSpecName: "logs") pod "d04cb142-7473-455b-8d5b-f79d879d8d58" (UID: "d04cb142-7473-455b-8d5b-f79d879d8d58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.061852 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement3a43-account-delete-t5vdd"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.061933 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6869bc4646-lrqdg"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.063135 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d04cb142-7473-455b-8d5b-f79d879d8d58" (UID: "d04cb142-7473-455b-8d5b-f79d879d8d58"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.065263 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb825980-5dc2-420a-8638-9607a9f1eb1f-kube-api-access-82hbd" (OuterVolumeSpecName: "kube-api-access-82hbd") pod "cb825980-5dc2-420a-8638-9607a9f1eb1f" (UID: "cb825980-5dc2-420a-8638-9607a9f1eb1f"). InnerVolumeSpecName "kube-api-access-82hbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.066959 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f85d40e-16b8-4ece-a268-8b4d227ac36c-logs" (OuterVolumeSpecName: "logs") pod "6f85d40e-16b8-4ece-a268-8b4d227ac36c" (UID: "6f85d40e-16b8-4ece-a268-8b4d227ac36c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.070193 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6869bc4646-lrqdg"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.073232 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f85d40e-16b8-4ece-a268-8b4d227ac36c" (UID: "6f85d40e-16b8-4ece-a268-8b4d227ac36c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.076826 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-scripts" (OuterVolumeSpecName: "scripts") pod "d04cb142-7473-455b-8d5b-f79d879d8d58" (UID: "d04cb142-7473-455b-8d5b-f79d879d8d58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.077617 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.090022 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04cb142-7473-455b-8d5b-f79d879d8d58-kube-api-access-p9wmd" (OuterVolumeSpecName: "kube-api-access-p9wmd") pod "d04cb142-7473-455b-8d5b-f79d879d8d58" (UID: "d04cb142-7473-455b-8d5b-f79d879d8d58"). InnerVolumeSpecName "kube-api-access-p9wmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.090148 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.090106 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a-kube-api-access-sc4t9" (OuterVolumeSpecName: "kube-api-access-sc4t9") pod "f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a" (UID: "f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a"). InnerVolumeSpecName "kube-api-access-sc4t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.090219 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "d04cb142-7473-455b-8d5b-f79d879d8d58" (UID: "d04cb142-7473-455b-8d5b-f79d879d8d58"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.116378 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11f688b0-b3aa-46f7-a700-c6619e3a3951" (UID: "11f688b0-b3aa-46f7-a700-c6619e3a3951"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.141294 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "11f688b0-b3aa-46f7-a700-c6619e3a3951" (UID: "11f688b0-b3aa-46f7-a700-c6619e3a3951"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.155724 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-nova-metadata-tls-certs\") pod \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.155766 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaf5d8e-00de-473b-91d2-1dd8a7354853-logs\") pod \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.155818 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-config-data\") pod \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.155875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-combined-ca-bundle\") pod \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.155915 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kt55\" (UniqueName: \"kubernetes.io/projected/2aaf5d8e-00de-473b-91d2-1dd8a7354853-kube-api-access-6kt55\") pod \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\" (UID: \"2aaf5d8e-00de-473b-91d2-1dd8a7354853\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156340 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156357 4833 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156368 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156377 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156386 4833 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-sc4t9\" (UniqueName: \"kubernetes.io/projected/f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a-kube-api-access-sc4t9\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156396 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trn26\" (UniqueName: \"kubernetes.io/projected/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-api-access-trn26\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156404 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82hbd\" (UniqueName: \"kubernetes.io/projected/cb825980-5dc2-420a-8638-9607a9f1eb1f-kube-api-access-82hbd\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156413 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156430 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156439 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9wmd\" (UniqueName: \"kubernetes.io/projected/d04cb142-7473-455b-8d5b-f79d879d8d58-kube-api-access-p9wmd\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156448 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f85d40e-16b8-4ece-a268-8b4d227ac36c-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156456 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b7nh\" (UniqueName: \"kubernetes.io/projected/6f85d40e-16b8-4ece-a268-8b4d227ac36c-kube-api-access-7b7nh\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.156465 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d04cb142-7473-455b-8d5b-f79d879d8d58-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.167803 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aaf5d8e-00de-473b-91d2-1dd8a7354853-logs" (OuterVolumeSpecName: "logs") pod "2aaf5d8e-00de-473b-91d2-1dd8a7354853" (UID: "2aaf5d8e-00de-473b-91d2-1dd8a7354853"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.177058 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-config-data" (OuterVolumeSpecName: "config-data") pod "d04cb142-7473-455b-8d5b-f79d879d8d58" (UID: "d04cb142-7473-455b-8d5b-f79d879d8d58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.177609 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aaf5d8e-00de-473b-91d2-1dd8a7354853-kube-api-access-6kt55" (OuterVolumeSpecName: "kube-api-access-6kt55") pod "2aaf5d8e-00de-473b-91d2-1dd8a7354853" (UID: "2aaf5d8e-00de-473b-91d2-1dd8a7354853"). 
InnerVolumeSpecName "kube-api-access-6kt55". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.239761 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.250325 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.257959 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kt55\" (UniqueName: \"kubernetes.io/projected/2aaf5d8e-00de-473b-91d2-1dd8a7354853-kube-api-access-6kt55\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.257986 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.257995 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aaf5d8e-00de-473b-91d2-1dd8a7354853-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.262438 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.264378 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.268192 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2aaf5d8e-00de-473b-91d2-1dd8a7354853" (UID: "2aaf5d8e-00de-473b-91d2-1dd8a7354853"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.270039 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.270082 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.279476 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.279617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-config-data" (OuterVolumeSpecName: "config-data") pod "2aaf5d8e-00de-473b-91d2-1dd8a7354853" (UID: "2aaf5d8e-00de-473b-91d2-1dd8a7354853"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.280748 4833 generic.go:334] "Generic (PLEG): container finished" podID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerID="b566af3cdcf6966c91d8eb92814d438d4b7a59c8593fa14b053ca258afc3130a" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.280870 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaeaef09-d532-4399-b9bb-c9e59fbf1a62","Type":"ContainerDied","Data":"b566af3cdcf6966c91d8eb92814d438d4b7a59c8593fa14b053ca258afc3130a"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.280936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaeaef09-d532-4399-b9bb-c9e59fbf1a62","Type":"ContainerDied","Data":"9379fbfd7950ced81fb002d6875d701e087c8f38f737429e34e06513f1b0a092"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.280948 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9379fbfd7950ced81fb002d6875d701e087c8f38f737429e34e06513f1b0a092" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.309323 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.309389 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.333453 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aaf5d8e-00de-473b-91d2-1dd8a7354853" (UID: "2aaf5d8e-00de-473b-91d2-1dd8a7354853"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.337683 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell07d06-account-delete-22d5b" event={"ID":"cb825980-5dc2-420a-8638-9607a9f1eb1f","Type":"ContainerDied","Data":"3d572d1e31130a37e1ab161e394d41c0e399953a2d148de59f63141b31c9a3af"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.337733 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d572d1e31130a37e1ab161e394d41c0e399953a2d148de59f63141b31c9a3af" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.337803 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell07d06-account-delete-22d5b" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.354816 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "11f688b0-b3aa-46f7-a700-c6619e3a3951" (UID: "11f688b0-b3aa-46f7-a700-c6619e3a3951"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.364603 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.364633 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.364644 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.364654 4833 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f688b0-b3aa-46f7-a700-c6619e3a3951-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.364662 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aaf5d8e-00de-473b-91d2-1dd8a7354853-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.374013 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d04cb142-7473-455b-8d5b-f79d879d8d58" (UID: "d04cb142-7473-455b-8d5b-f79d879d8d58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.391833 4833 generic.go:334] "Generic (PLEG): container finished" podID="11f688b0-b3aa-46f7-a700-c6619e3a3951" containerID="96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5" exitCode=2 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.391914 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11f688b0-b3aa-46f7-a700-c6619e3a3951","Type":"ContainerDied","Data":"96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.391946 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11f688b0-b3aa-46f7-a700-c6619e3a3951","Type":"ContainerDied","Data":"46e55a5dc3c01f45b9a9d34700ad0acecc5eeb0b7d1384d0632bb706f27a7d39"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.392017 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.416362 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.416967 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d04cb142-7473-455b-8d5b-f79d879d8d58","Type":"ContainerDied","Data":"18280953815e89a47e836a1c880b8b210c94a68182ac3f04e94c5cb9fe4ff098"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.425238 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d04cb142-7473-455b-8d5b-f79d879d8d58" (UID: "d04cb142-7473-455b-8d5b-f79d879d8d58"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.442771 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" containerID="5e2a48246659117dd99f108e17e66bdef082cff5d89afeb57ea91830bf119391" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.442871 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba","Type":"ContainerDied","Data":"5e2a48246659117dd99f108e17e66bdef082cff5d89afeb57ea91830bf119391"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.459679 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.460412 4833 generic.go:334] "Generic (PLEG): container finished" podID="fd0bf370-6aac-4334-b612-db75770844df" containerID="d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.460639 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd0bf370-6aac-4334-b612-db75770844df","Type":"ContainerDied","Data":"d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.466084 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.466115 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d04cb142-7473-455b-8d5b-f79d879d8d58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.478867 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aaf5d8e-00de-473b-91d2-1dd8a7354853","Type":"ContainerDied","Data":"9ba2a863b1273cd7f53154afc472c8c317f55783b478d39378e76bab634d4ee5"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.478945 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.492575 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.495196 4833 scope.go:117] "RemoveContainer" containerID="bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.504620 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.511228 4833 generic.go:334] "Generic (PLEG): container finished" podID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerID="40c6e393bbfaf517c5fecd9b2453770dae8d96c73815045f791a0be9bcebd55d" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.511285 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" event={"ID":"8a18c26d-a476-4e4b-9320-84369da38cf2","Type":"ContainerDied","Data":"40c6e393bbfaf517c5fecd9b2453770dae8d96c73815045f791a0be9bcebd55d"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.512436 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f85d40e-16b8-4ece-a268-8b4d227ac36c" (UID: "6f85d40e-16b8-4ece-a268-8b4d227ac36c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.514809 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell07d06-account-delete-22d5b"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.524069 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.524406 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi1bdf-account-delete-k9wpj" event={"ID":"f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a","Type":"ContainerDied","Data":"482ab47ae80b8a955457765311b18bc8725ec266f7a4be7f5bb96f06d3aaaf2e"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.524434 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="482ab47ae80b8a955457765311b18bc8725ec266f7a4be7f5bb96f06d3aaaf2e" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.524483 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi1bdf-account-delete-k9wpj" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.525633 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.526356 4833 generic.go:334] "Generic (PLEG): container finished" podID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerID="d05913f08dee4311606e7fd0c07f800f52f54e4b74d0ee36fae94de7571c4162" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.526409 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69c5134b-fc5b-453c-87ee-6a26e08796cf","Type":"ContainerDied","Data":"d05913f08dee4311606e7fd0c07f800f52f54e4b74d0ee36fae94de7571c4162"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.529594 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell07d06-account-delete-22d5b"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.529649 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerDied","Data":"2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.529653 4833 generic.go:334] "Generic (PLEG): container finished" podID="4418034e-f484-4638-94bd-5b086af9e8f3" containerID="2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.529690 4833 generic.go:334] "Generic (PLEG): container finished" podID="4418034e-f484-4638-94bd-5b086af9e8f3" containerID="56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5" exitCode=2 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.529704 4833 generic.go:334] "Generic (PLEG): container finished" podID="4418034e-f484-4638-94bd-5b086af9e8f3" containerID="b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.529861 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerDied","Data":"56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.529885 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerDied","Data":"b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.532342 4833 generic.go:334] "Generic (PLEG): container finished" podID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" 
containerID="cbe232a2ef3f6d567a6669fbc2756b76c79d3815ef20ff4c3ce4de44b9dfa6da" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.532421 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595797578d-ddhnv" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.551359 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.556810 4833 scope.go:117] "RemoveContainer" containerID="ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.558087 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a\": container with ID starting with ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a not found: ID does not exist" containerID="ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.558127 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a"} err="failed to get container status \"ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a\": rpc error: code = NotFound desc = could not find container \"ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a\": container with ID starting with ef7d82e1a30e86fc26d1eaeeddc5dbfd7806656a7caf33159465853af570230a not found: ID does not exist" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.558151 4833 scope.go:117] "RemoveContainer" containerID="bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.558623 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410\": container with ID starting with bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410 not found: ID does not exist" containerID="bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.558652 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410"} err="failed to get container status \"bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410\": rpc error: code = NotFound desc = could not find container \"bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410\": container with ID starting with bb3c5e96c00181e44f04caa42689894453b910ac136df05f2f6dad225247c410 not found: ID does not exist" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.558676 4833 scope.go:117] "RemoveContainer" containerID="149c8e3ce91dc7834b024b0cc72a73ba4a7ebf9adfb1a90e69396b5c7bb5813c" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.558824 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.566381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595797578d-ddhnv" event={"ID":"7be1410c-e237-4abe-9a2d-c8e8b5242d93","Type":"ContainerDied","Data":"cbe232a2ef3f6d567a6669fbc2756b76c79d3815ef20ff4c3ce4de44b9dfa6da"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.566839 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.566885 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-logs\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.566920 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6q7d\" (UniqueName: \"kubernetes.io/projected/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-kube-api-access-j6q7d\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.566940 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-internal-tls-certs\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.567003 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-etc-machine-id\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.567070 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-public-tls-certs\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.567090 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.567121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-combined-ca-bundle\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.567139 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 
06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.567496 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.569787 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.570792 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-logs" (OuterVolumeSpecName: "logs") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.574325 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts" (OuterVolumeSpecName: "scripts") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.574575 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerID="c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99" exitCode=0 Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.574609 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bf7fd98f9-j4rff" event={"ID":"6f85d40e-16b8-4ece-a268-8b4d227ac36c","Type":"ContainerDied","Data":"c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.574634 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bf7fd98f9-j4rff" event={"ID":"6f85d40e-16b8-4ece-a268-8b4d227ac36c","Type":"ContainerDied","Data":"a961829225c701bb0ce386f6eebbd368a8e79e25764f59f09a1f23636ae17f42"} Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.574638 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-bf7fd98f9-j4rff" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.574682 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.574701 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.577015 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data" (OuterVolumeSpecName: "config-data") pod "6f85d40e-16b8-4ece-a268-8b4d227ac36c" (UID: "6f85d40e-16b8-4ece-a268-8b4d227ac36c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.577086 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-kube-api-access-j6q7d" (OuterVolumeSpecName: "kube-api-access-j6q7d") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "kube-api-access-j6q7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.581707 4833 scope.go:117] "RemoveContainer" containerID="92baacf08545a4d0d20572f7ee58b21cd9c99873a45dab1e9d54ae3ddc847186" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.607668 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.610586 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi1bdf-account-delete-k9wpj"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.617500 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi1bdf-account-delete-k9wpj"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.642252 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.646675 4833 scope.go:117] "RemoveContainer" containerID="96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.648628 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.656492 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.668497 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a18c26d-a476-4e4b-9320-84369da38cf2-logs" (OuterVolumeSpecName: "logs") pod "8a18c26d-a476-4e4b-9320-84369da38cf2" (UID: "8a18c26d-a476-4e4b-9320-84369da38cf2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.670299 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.668121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a18c26d-a476-4e4b-9320-84369da38cf2-logs\") pod \"8a18c26d-a476-4e4b-9320-84369da38cf2\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673649 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-combined-ca-bundle\") pod \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673676 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-internal-tls-certs\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673701 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-internal-tls-certs\") pod \"69c5134b-fc5b-453c-87ee-6a26e08796cf\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673721 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data-custom\") pod \"8a18c26d-a476-4e4b-9320-84369da38cf2\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673742 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs2px\" (UniqueName: \"kubernetes.io/projected/7be1410c-e237-4abe-9a2d-c8e8b5242d93-kube-api-access-gs2px\") pod \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673770 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data-custom\") pod \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673786 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-config-data\") pod \"fd0bf370-6aac-4334-b612-db75770844df\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673822 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-config-data\") pod \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673861 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kolla-config\") pod \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673881 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be1410c-e237-4abe-9a2d-c8e8b5242d93-logs\") pod \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673906 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data\") pod \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673929 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-public-tls-certs\") pod \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\" (UID: \"aaeaef09-d532-4399-b9bb-c9e59fbf1a62\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.673974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-public-tls-certs\") pod \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674004 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c5134b-fc5b-453c-87ee-6a26e08796cf-logs\") pod \"69c5134b-fc5b-453c-87ee-6a26e08796cf\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674027 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-combined-ca-bundle\") pod \"fd0bf370-6aac-4334-b612-db75770844df\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674048 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-memcached-tls-certs\") pod \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674063 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-combined-ca-bundle\") pod \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674083 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-combined-ca-bundle\") pod \"69c5134b-fc5b-453c-87ee-6a26e08796cf\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674099 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-combined-ca-bundle\") pod \"8a18c26d-a476-4e4b-9320-84369da38cf2\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674117 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtdbn\" (UniqueName: \"kubernetes.io/projected/fd0bf370-6aac-4334-b612-db75770844df-kube-api-access-mtdbn\") pod \"fd0bf370-6aac-4334-b612-db75770844df\" (UID: \"fd0bf370-6aac-4334-b612-db75770844df\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674131 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dncg\" (UniqueName: \"kubernetes.io/projected/8a18c26d-a476-4e4b-9320-84369da38cf2-kube-api-access-7dncg\") pod \"8a18c26d-a476-4e4b-9320-84369da38cf2\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674152 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-internal-tls-certs\") pod \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\" (UID: \"7be1410c-e237-4abe-9a2d-c8e8b5242d93\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674167 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd62r\" (UniqueName: \"kubernetes.io/projected/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kube-api-access-gd62r\") pod \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\" (UID: \"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674188 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98fjg\" (UniqueName: \"kubernetes.io/projected/69c5134b-fc5b-453c-87ee-6a26e08796cf-kube-api-access-98fjg\") pod \"69c5134b-fc5b-453c-87ee-6a26e08796cf\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674208 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-config-data\") pod \"69c5134b-fc5b-453c-87ee-6a26e08796cf\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674222 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-public-tls-certs\") pod \"69c5134b-fc5b-453c-87ee-6a26e08796cf\" (UID: \"69c5134b-fc5b-453c-87ee-6a26e08796cf\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674249 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data\") pod \"8a18c26d-a476-4e4b-9320-84369da38cf2\" (UID: \"8a18c26d-a476-4e4b-9320-84369da38cf2\") " Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674740 4833 reconciler_common.go:293] "Volume detached for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674753 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a18c26d-a476-4e4b-9320-84369da38cf2-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674763 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674772 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6q7d\" (UniqueName: \"kubernetes.io/projected/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-kube-api-access-j6q7d\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674780 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674789 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674797 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.674804 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f85d40e-16b8-4ece-a268-8b4d227ac36c-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: W1013 06:50:01.673818 4833 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/aaeaef09-d532-4399-b9bb-c9e59fbf1a62/volumes/kubernetes.io~secret/internal-tls-certs Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.683166 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.674861 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.683228 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data podName:827f736f-2193-4ebd-ab7f-99fb22945d1e nodeName:}" failed. No retries permitted until 2025-10-13 06:50:09.683211539 +0000 UTC m=+1299.783634455 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data") pod "rabbitmq-server-0" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e") : configmap "rabbitmq-config-data" not found Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.685779 4833 scope.go:117] "RemoveContainer" containerID="96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.686635 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be1410c-e237-4abe-9a2d-c8e8b5242d93-kube-api-access-gs2px" (OuterVolumeSpecName: "kube-api-access-gs2px") pod "7be1410c-e237-4abe-9a2d-c8e8b5242d93" (UID: "7be1410c-e237-4abe-9a2d-c8e8b5242d93"). InnerVolumeSpecName "kube-api-access-gs2px". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.687262 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5\": container with ID starting with 96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5 not found: ID does not exist" containerID="96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5" Oct 13 06:50:01 crc kubenswrapper[4833]: W1013 06:50:01.676629 4833 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/aaeaef09-d532-4399-b9bb-c9e59fbf1a62/volumes/kubernetes.io~secret/public-tls-certs Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.687318 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.683136 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-config-data" (OuterVolumeSpecName: "config-data") pod "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" (UID: "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.687329 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5"} err="failed to get container status \"96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5\": rpc error: code = NotFound desc = could not find container \"96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5\": container with ID starting with 96e44e80a4356d715cd071293816d2a15545c8906346c80f998d3bb584b779b5 not found: ID does not exist" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.687363 4833 scope.go:117] "RemoveContainer" containerID="51e7bc679df23d3e526317bc29126a1542ce20237f8a48a696b934699094819a" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.687573 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69c5134b-fc5b-453c-87ee-6a26e08796cf-logs" (OuterVolumeSpecName: "logs") pod "69c5134b-fc5b-453c-87ee-6a26e08796cf" (UID: "69c5134b-fc5b-453c-87ee-6a26e08796cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.689138 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be1410c-e237-4abe-9a2d-c8e8b5242d93-logs" (OuterVolumeSpecName: "logs") pod "7be1410c-e237-4abe-9a2d-c8e8b5242d93" (UID: "7be1410c-e237-4abe-9a2d-c8e8b5242d93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.693891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a18c26d-a476-4e4b-9320-84369da38cf2" (UID: "8a18c26d-a476-4e4b-9320-84369da38cf2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.695312 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" (UID: "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.703970 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7be1410c-e237-4abe-9a2d-c8e8b5242d93" (UID: "7be1410c-e237-4abe-9a2d-c8e8b5242d93"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.707617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kube-api-access-gd62r" (OuterVolumeSpecName: "kube-api-access-gd62r") pod "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" (UID: "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba"). InnerVolumeSpecName "kube-api-access-gd62r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.716937 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c5134b-fc5b-453c-87ee-6a26e08796cf-kube-api-access-98fjg" (OuterVolumeSpecName: "kube-api-access-98fjg") pod "69c5134b-fc5b-453c-87ee-6a26e08796cf" (UID: "69c5134b-fc5b-453c-87ee-6a26e08796cf"). InnerVolumeSpecName "kube-api-access-98fjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.717158 4833 scope.go:117] "RemoveContainer" containerID="a14dbf5251baa6dac8fcb1a3b7d4c495bc7314e806a21af252e2c6ac6c47c059" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.728274 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a18c26d-a476-4e4b-9320-84369da38cf2-kube-api-access-7dncg" (OuterVolumeSpecName: "kube-api-access-7dncg") pod "8a18c26d-a476-4e4b-9320-84369da38cf2" (UID: "8a18c26d-a476-4e4b-9320-84369da38cf2"). InnerVolumeSpecName "kube-api-access-7dncg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.733270 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0bf370-6aac-4334-b612-db75770844df-kube-api-access-mtdbn" (OuterVolumeSpecName: "kube-api-access-mtdbn") pod "fd0bf370-6aac-4334-b612-db75770844df" (UID: "fd0bf370-6aac-4334-b612-db75770844df"). InnerVolumeSpecName "kube-api-access-mtdbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.752987 4833 scope.go:117] "RemoveContainer" containerID="0c49851d3254ed77c14a56073d79efd51082af7a60fed7458676ff9c919c96c6" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.761628 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.764514 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.765663 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd0bf370-6aac-4334-b612-db75770844df" (UID: "fd0bf370-6aac-4334-b612-db75770844df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.767649 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7be1410c-e237-4abe-9a2d-c8e8b5242d93" (UID: "7be1410c-e237-4abe-9a2d-c8e8b5242d93"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777163 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c5134b-fc5b-453c-87ee-6a26e08796cf-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777184 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777212 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777221 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtdbn\" (UniqueName: \"kubernetes.io/projected/fd0bf370-6aac-4334-b612-db75770844df-kube-api-access-mtdbn\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777229 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dncg\" (UniqueName: \"kubernetes.io/projected/8a18c26d-a476-4e4b-9320-84369da38cf2-kube-api-access-7dncg\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777247 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd62r\" (UniqueName: \"kubernetes.io/projected/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kube-api-access-gd62r\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777255 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98fjg\" (UniqueName: \"kubernetes.io/projected/69c5134b-fc5b-453c-87ee-6a26e08796cf-kube-api-access-98fjg\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777263 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777289 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777299 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs2px\" (UniqueName: \"kubernetes.io/projected/7be1410c-e237-4abe-9a2d-c8e8b5242d93-kube-api-access-gs2px\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777309 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777317 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777325 4833 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777332 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be1410c-e237-4abe-9a2d-c8e8b5242d93-logs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.777340 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.782157 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rtrth" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" probeResult="failure" output="command timed out" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.785936 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-config-data" (OuterVolumeSpecName: "config-data") pod "fd0bf370-6aac-4334-b612-db75770844df" (UID: "fd0bf370-6aac-4334-b612-db75770844df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.786030 4833 scope.go:117] "RemoveContainer" containerID="988658baa74f964f157fcd718e94c95fc2e7688fc3335d190e84005d02e7fd3a" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.811741 4833 scope.go:117] "RemoveContainer" containerID="cbe232a2ef3f6d567a6669fbc2756b76c79d3815ef20ff4c3ce4de44b9dfa6da" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.822883 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.831141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-config-data" (OuterVolumeSpecName: "config-data") pod "69c5134b-fc5b-453c-87ee-6a26e08796cf" (UID: "69c5134b-fc5b-453c-87ee-6a26e08796cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.845119 4833 scope.go:117] "RemoveContainer" containerID="e7c4fb08195b32e50c609a55ad8f5ba6ccf4ebfb598a3cf9e868edc5d55a8023" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.876183 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7be1410c-e237-4abe-9a2d-c8e8b5242d93" (UID: "7be1410c-e237-4abe-9a2d-c8e8b5242d93"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.878358 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0bf370-6aac-4334-b612-db75770844df-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.878388 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.878398 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.886457 4833 scope.go:117] "RemoveContainer" containerID="c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.898889 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "69c5134b-fc5b-453c-87ee-6a26e08796cf" (UID: "69c5134b-fc5b-453c-87ee-6a26e08796cf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.903683 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rtrth" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" probeResult="failure" output=< Oct 13 06:50:01 crc kubenswrapper[4833]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Oct 13 06:50:01 crc kubenswrapper[4833]: > Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.913429 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-bf7fd98f9-j4rff"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.921096 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" (UID: "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.921321 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "69c5134b-fc5b-453c-87ee-6a26e08796cf" (UID: "69c5134b-fc5b-453c-87ee-6a26e08796cf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.922274 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-bf7fd98f9-j4rff"] Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.923721 4833 scope.go:117] "RemoveContainer" containerID="f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.929296 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a18c26d-a476-4e4b-9320-84369da38cf2" (UID: "8a18c26d-a476-4e4b-9320-84369da38cf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.961809 4833 scope.go:117] "RemoveContainer" containerID="c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.965703 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99\": container with ID starting with c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99 not found: ID does not exist" containerID="c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.965745 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99"} err="failed to get container status \"c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99\": rpc error: code = NotFound desc = could not find container \"c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99\": container with ID starting with c1bd611de8c17665166390a0cbc9052a69c7ff68323956a33d62985368f8cd99 not found: ID does not exist" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.965769 4833 scope.go:117] "RemoveContainer" containerID="f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.965891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69c5134b-fc5b-453c-87ee-6a26e08796cf" (UID: "69c5134b-fc5b-453c-87ee-6a26e08796cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.966292 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465\": container with ID starting with f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465 not found: ID does not exist" containerID="f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.966318 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465"} err="failed to get container status \"f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465\": rpc error: code = NotFound desc = could not find container \"f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465\": container with ID starting with f36c7308b02b9cd8d73f30a8ea3b598f9d78fa61234c951811b74664fe47b465 not found: ID does not exist" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.971639 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7be1410c-e237-4abe-9a2d-c8e8b5242d93" (UID: "7be1410c-e237-4abe-9a2d-c8e8b5242d93"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.971808 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" (UID: "fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981263 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data" (OuterVolumeSpecName: "config-data") pod "aaeaef09-d532-4399-b9bb-c9e59fbf1a62" (UID: "aaeaef09-d532-4399-b9bb-c9e59fbf1a62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981838 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeaef09-d532-4399-b9bb-c9e59fbf1a62-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981868 4833 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981881 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981893 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981904 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981915 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981926 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.981937 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c5134b-fc5b-453c-87ee-6a26e08796cf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.982140 4833 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 13 06:50:01 crc kubenswrapper[4833]: E1013 06:50:01.982199 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data podName:0a6ab499-ed60-45e7-b510-5a43422aa7f5 nodeName:}" failed. No retries permitted until 2025-10-13 06:50:09.98218194 +0000 UTC m=+1300.082604856 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data") pod "rabbitmq-cell1-server-0" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5") : configmap "rabbitmq-cell1-config-data" not found Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.983749 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data" (OuterVolumeSpecName: "config-data") pod "7be1410c-e237-4abe-9a2d-c8e8b5242d93" (UID: "7be1410c-e237-4abe-9a2d-c8e8b5242d93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:01 crc kubenswrapper[4833]: I1013 06:50:01.987373 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data" (OuterVolumeSpecName: "config-data") pod "8a18c26d-a476-4e4b-9320-84369da38cf2" (UID: "8a18c26d-a476-4e4b-9320-84369da38cf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.083695 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a18c26d-a476-4e4b-9320-84369da38cf2-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.083719 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be1410c-e237-4abe-9a2d-c8e8b5242d93-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.387710 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-595797578d-ddhnv"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.406244 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-595797578d-ddhnv"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.591951 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.598328 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba","Type":"ContainerDied","Data":"93de4ba4fb1191bc7db4fdba21230ba56fb035a2a010fd3a123a7e8b541d3fd1"} Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.598371 4833 scope.go:117] "RemoveContainer" containerID="5e2a48246659117dd99f108e17e66bdef082cff5d89afeb57ea91830bf119391" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.598488 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.614472 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd0bf370-6aac-4334-b612-db75770844df","Type":"ContainerDied","Data":"3925169af8a9b5f4ed6c674eb78c9e1a977bae7d21b3de8613b162e7a08e0aae"} Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.614504 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.627957 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 06:50:02 crc kubenswrapper[4833]: E1013 06:50:02.636831 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d is running failed: container process not found" containerID="e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 13 06:50:02 crc kubenswrapper[4833]: E1013 06:50:02.637431 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d is running failed: container process not found" containerID="e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.639091 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f688b0-b3aa-46f7-a700-c6619e3a3951" path="/var/lib/kubelet/pods/11f688b0-b3aa-46f7-a700-c6619e3a3951/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.639721 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" path="/var/lib/kubelet/pods/2aaf5d8e-00de-473b-91d2-1dd8a7354853/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.640285 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f980bce-4b41-461d-9a1f-af4e6fb7455b" path="/var/lib/kubelet/pods/3f980bce-4b41-461d-9a1f-af4e6fb7455b/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: E1013 06:50:02.640282 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d is running failed: container process not found" containerID="e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 13 06:50:02 crc kubenswrapper[4833]: E1013 06:50:02.640448 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="ovn-northd" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.642006 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626d71e0-e957-4a46-9565-d19058a575c9" path="/var/lib/kubelet/pods/626d71e0-e957-4a46-9565-d19058a575c9/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.642659 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" path="/var/lib/kubelet/pods/6f85d40e-16b8-4ece-a268-8b4d227ac36c/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.643400 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" path="/var/lib/kubelet/pods/77004520-24e0-4076-8155-b4a8b6b3e1a2/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.644561 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" path="/var/lib/kubelet/pods/7be1410c-e237-4abe-9a2d-c8e8b5242d93/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.645142 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" path="/var/lib/kubelet/pods/baa4dafc-7be7-4f97-ba72-359c27e3151c/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.645740 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb825980-5dc2-420a-8638-9607a9f1eb1f" path="/var/lib/kubelet/pods/cb825980-5dc2-420a-8638-9607a9f1eb1f/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.646716 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" path="/var/lib/kubelet/pods/d04cb142-7473-455b-8d5b-f79d879d8d58/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.647219 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0bc4033-85b9-4212-b1e2-3c5888ddcf0a" path="/var/lib/kubelet/pods/d0bc4033-85b9-4212-b1e2-3c5888ddcf0a/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.647698 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e71af496-4851-4904-9003-0358adc97b94" path="/var/lib/kubelet/pods/e71af496-4851-4904-9003-0358adc97b94/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.648555 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a" path="/var/lib/kubelet/pods/f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a/volumes" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.654157 4833 generic.go:334] "Generic (PLEG): container finished" podID="aa0ca608-57b5-4289-9271-fcc10a6c7422" containerID="be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03" exitCode=0 Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.654265 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.669741 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c1a7008b-3448-4108-81b0-4d16484a6f7b/ovn-northd/0.log" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.669780 4833 generic.go:334] "Generic (PLEG): container finished" podID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerID="e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" exitCode=139 Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.674333 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.676638 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.692594 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-kolla-config\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.692639 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-galera-tls-certs\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.692701 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-default\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.692754 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-secrets\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.693085 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-combined-ca-bundle\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.693123 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96dr9\" (UniqueName: \"kubernetes.io/projected/aa0ca608-57b5-4289-9271-fcc10a6c7422-kube-api-access-96dr9\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.693151 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-operator-scripts\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.693197 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-generated\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.693232 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"aa0ca608-57b5-4289-9271-fcc10a6c7422\" (UID: \"aa0ca608-57b5-4289-9271-fcc10a6c7422\") " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.693406 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-default" (OuterVolumeSpecName: "config-data-default") pod 
"aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.693451 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.694005 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-default\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.694032 4833 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.694233 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.694366 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.698075 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-secrets" (OuterVolumeSpecName: "secrets") pod "aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.701780 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0ca608-57b5-4289-9271-fcc10a6c7422-kube-api-access-96dr9" (OuterVolumeSpecName: "kube-api-access-96dr9") pod "aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "kube-api-access-96dr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.704469 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.712846 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69c5134b-fc5b-453c-87ee-6a26e08796cf","Type":"ContainerDied","Data":"84213ef112374c121d2b79a915af71f5d3bc419e0ffe5f2dbd77da56240629a4"} Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.712897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa0ca608-57b5-4289-9271-fcc10a6c7422","Type":"ContainerDied","Data":"be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03"} Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.712913 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa0ca608-57b5-4289-9271-fcc10a6c7422","Type":"ContainerDied","Data":"03828655167a40072a1527a33a61b9fbc5313e490328900dc94ec83fea826c0e"} Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.712924 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1a7008b-3448-4108-81b0-4d16484a6f7b","Type":"ContainerDied","Data":"e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d"} Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.712936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f847dcbd8-p95b9" event={"ID":"8a18c26d-a476-4e4b-9320-84369da38cf2","Type":"ContainerDied","Data":"3b968c0d9589f8dc9186d50c9a1e3b9eb80e98b8ec3f6cd28a674143e69a7e2e"} Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.789291 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f847dcbd8-p95b9"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.789340 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5f847dcbd8-p95b9"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.768840 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.791895 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa0ca608-57b5-4289-9271-fcc10a6c7422" (UID: "aa0ca608-57b5-4289-9271-fcc10a6c7422"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.796694 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.801341 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.801361 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96dr9\" (UniqueName: \"kubernetes.io/projected/aa0ca608-57b5-4289-9271-fcc10a6c7422-kube-api-access-96dr9\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.801371 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0ca608-57b5-4289-9271-fcc10a6c7422-operator-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.801379 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa0ca608-57b5-4289-9271-fcc10a6c7422-config-data-generated\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.801451 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.801476 4833 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.801487 4833 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aa0ca608-57b5-4289-9271-fcc10a6c7422-secrets\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.817632 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.819744 4833 scope.go:117] "RemoveContainer" containerID="d828744544a28555a1b28a9ac7c2a4e7360927b89674b6368f01b8b2cf5d2ad8" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.824734 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.839786 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.844558 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.854245 4833 scope.go:117] "RemoveContainer" containerID="d05913f08dee4311606e7fd0c07f800f52f54e4b74d0ee36fae94de7571c4162" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.875943 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.879818 4833 scope.go:117] "RemoveContainer" containerID="d419f7d589b55bb7907d4d67106a93b203046358490c3edf1dc9eeca8e0bd809" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.882514 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/memcached-0"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.891121 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.894763 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.905427 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.912755 4833 scope.go:117] "RemoveContainer" containerID="be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.916988 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c1a7008b-3448-4108-81b0-4d16484a6f7b/ovn-northd/0.log" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.917052 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 13 06:50:02 crc kubenswrapper[4833]: I1013 06:50:02.977485 4833 scope.go:117] "RemoveContainer" containerID="b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc" Oct 13 06:50:02 crc kubenswrapper[4833]: E1013 06:50:02.995995 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:50:02 crc kubenswrapper[4833]: E1013 06:50:02.997666 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:50:02 crc kubenswrapper[4833]: E1013 06:50:02.999008 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 06:50:02 crc kubenswrapper[4833]: E1013 06:50:02.999067 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" containerName="nova-cell1-conductor-conductor" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.006773 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcrm2\" (UniqueName: \"kubernetes.io/projected/c1a7008b-3448-4108-81b0-4d16484a6f7b-kube-api-access-wcrm2\") pod \"c1a7008b-3448-4108-81b0-4d16484a6f7b\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.006831 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-combined-ca-bundle\") pod 
\"c1a7008b-3448-4108-81b0-4d16484a6f7b\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.006889 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-rundir\") pod \"c1a7008b-3448-4108-81b0-4d16484a6f7b\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.006908 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-scripts\") pod \"c1a7008b-3448-4108-81b0-4d16484a6f7b\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.006929 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-metrics-certs-tls-certs\") pod \"c1a7008b-3448-4108-81b0-4d16484a6f7b\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.006962 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-northd-tls-certs\") pod \"c1a7008b-3448-4108-81b0-4d16484a6f7b\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.007020 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-config\") pod \"c1a7008b-3448-4108-81b0-4d16484a6f7b\" (UID: \"c1a7008b-3448-4108-81b0-4d16484a6f7b\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.007723 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "c1a7008b-3448-4108-81b0-4d16484a6f7b" (UID: "c1a7008b-3448-4108-81b0-4d16484a6f7b"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.007754 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-config" (OuterVolumeSpecName: "config") pod "c1a7008b-3448-4108-81b0-4d16484a6f7b" (UID: "c1a7008b-3448-4108-81b0-4d16484a6f7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.008292 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-scripts" (OuterVolumeSpecName: "scripts") pod "c1a7008b-3448-4108-81b0-4d16484a6f7b" (UID: "c1a7008b-3448-4108-81b0-4d16484a6f7b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.009671 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.011883 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a7008b-3448-4108-81b0-4d16484a6f7b-kube-api-access-wcrm2" (OuterVolumeSpecName: "kube-api-access-wcrm2") pod "c1a7008b-3448-4108-81b0-4d16484a6f7b" (UID: "c1a7008b-3448-4108-81b0-4d16484a6f7b"). InnerVolumeSpecName "kube-api-access-wcrm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.013083 4833 scope.go:117] "RemoveContainer" containerID="be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03" Oct 13 06:50:03 crc kubenswrapper[4833]: E1013 06:50:03.013404 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03\": container with ID starting with be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03 not found: ID does not exist" containerID="be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.013426 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03"} err="failed to get container status \"be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03\": rpc error: code = NotFound desc = could not find container \"be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03\": container with ID starting with be1096a47ac15e28bada61021cfe94e95ea96a8a39abb449b9b81d46dc563b03 not found: ID does not exist" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.013445 4833 scope.go:117] "RemoveContainer" containerID="b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc" Oct 13 06:50:03 crc kubenswrapper[4833]: E1013 06:50:03.013713 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc\": container with ID starting with b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc not found: ID does not exist" containerID="b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.013729 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc"} err="failed to get container status \"b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc\": rpc error: code = NotFound desc = could not find container \"b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc\": container with ID starting with b5b26884237d1d0340d022d2f237fead3af51824f579b5fa022a6074b7e977dc not found: ID does not exist" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.013742 4833 scope.go:117] "RemoveContainer" containerID="40c6e393bbfaf517c5fecd9b2453770dae8d96c73815045f791a0be9bcebd55d" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.016670 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.033214 4833 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1a7008b-3448-4108-81b0-4d16484a6f7b" (UID: "c1a7008b-3448-4108-81b0-4d16484a6f7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.110785 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcrm2\" (UniqueName: \"kubernetes.io/projected/c1a7008b-3448-4108-81b0-4d16484a6f7b-kube-api-access-wcrm2\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.110817 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.110830 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-rundir\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.110841 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.110853 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7008b-3448-4108-81b0-4d16484a6f7b-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.128620 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "c1a7008b-3448-4108-81b0-4d16484a6f7b" (UID: "c1a7008b-3448-4108-81b0-4d16484a6f7b"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.143666 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c1a7008b-3448-4108-81b0-4d16484a6f7b" (UID: "c1a7008b-3448-4108-81b0-4d16484a6f7b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.198709 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.209246 4833 scope.go:117] "RemoveContainer" containerID="4c2d835c2cdf83c5990f9e667ecb740187ba835cbe395bfdce7fceef0f080f02" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.212812 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.212854 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a7008b-3448-4108-81b0-4d16484a6f7b-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.287610 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="7113b07b-875e-4a09-a221-be312e4d0dce" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.199:6080/vnc_lite.html\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314048 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-erlang-cookie\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314193 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-plugins-conf\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314239 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-tls\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314274 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314334 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-confd\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-server-conf\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314692 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-plugins-conf" 
(OuterVolumeSpecName: "plugins-conf") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314877 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk2fq\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-kube-api-access-nk2fq\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314926 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.314977 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-plugins\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.315026 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827f736f-2193-4ebd-ab7f-99fb22945d1e-erlang-cookie-secret\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.315091 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827f736f-2193-4ebd-ab7f-99fb22945d1e-pod-info\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.315143 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data\") pod \"827f736f-2193-4ebd-ab7f-99fb22945d1e\" (UID: \"827f736f-2193-4ebd-ab7f-99fb22945d1e\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.315629 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.316096 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.316119 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.316139 4833 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.318338 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-kube-api-access-nk2fq" (OuterVolumeSpecName: "kube-api-access-nk2fq") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "kube-api-access-nk2fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.319139 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.323296 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/827f736f-2193-4ebd-ab7f-99fb22945d1e-pod-info" (OuterVolumeSpecName: "pod-info") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.323463 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.329507 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.330052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f736f-2193-4ebd-ab7f-99fb22945d1e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.339719 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data" (OuterVolumeSpecName: "config-data") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.362988 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-558c47b6d4-9zp2v" podUID="c03db41f-e7fb-4188-bd67-13f35c231490" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.151:5000/v3\": read tcp 10.217.0.2:49112->10.217.0.151:5000: read: connection reset by peer" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.384781 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-server-conf" (OuterVolumeSpecName: "server-conf") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.410782 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "827f736f-2193-4ebd-ab7f-99fb22945d1e" (UID: "827f736f-2193-4ebd-ab7f-99fb22945d1e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417223 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-plugins\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417282 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417303 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-plugins-conf\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417344 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-erlang-cookie\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvdcf\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-kube-api-access-xvdcf\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 
crc kubenswrapper[4833]: I1013 06:50:03.417418 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a6ab499-ed60-45e7-b510-5a43422aa7f5-pod-info\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417541 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-tls\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417584 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-server-conf\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417613 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417657 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a6ab499-ed60-45e7-b510-5a43422aa7f5-erlang-cookie-secret\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.417686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-confd\") pod \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\" (UID: \"0a6ab499-ed60-45e7-b510-5a43422aa7f5\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418297 4833 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/827f736f-2193-4ebd-ab7f-99fb22945d1e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418331 4833 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/827f736f-2193-4ebd-ab7f-99fb22945d1e-pod-info\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418345 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418356 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418390 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418405 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418417 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk2fq\" (UniqueName: \"kubernetes.io/projected/827f736f-2193-4ebd-ab7f-99fb22945d1e-kube-api-access-nk2fq\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418431 4833 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/827f736f-2193-4ebd-ab7f-99fb22945d1e-server-conf\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418509 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.418925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.419267 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.423088 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0a6ab499-ed60-45e7-b510-5a43422aa7f5-pod-info" (OuterVolumeSpecName: "pod-info") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.423710 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6ab499-ed60-45e7-b510-5a43422aa7f5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.430818 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.433891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.438472 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-kube-api-access-xvdcf" (OuterVolumeSpecName: "kube-api-access-xvdcf") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "kube-api-access-xvdcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.438846 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.445065 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data" (OuterVolumeSpecName: "config-data") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.463258 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-server-conf" (OuterVolumeSpecName: "server-conf") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519419 4833 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a6ab499-ed60-45e7-b510-5a43422aa7f5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519450 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519459 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519468 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519475 4833 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519486 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519495 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvdcf\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-kube-api-access-xvdcf\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519503 4833 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a6ab499-ed60-45e7-b510-5a43422aa7f5-pod-info\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519510 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519528 4833 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a6ab499-ed60-45e7-b510-5a43422aa7f5-server-conf\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.519576 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.524111 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0a6ab499-ed60-45e7-b510-5a43422aa7f5" (UID: "0a6ab499-ed60-45e7-b510-5a43422aa7f5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.551267 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.621718 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.622032 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a6ab499-ed60-45e7-b510-5a43422aa7f5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.667490 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-558c47b6d4-9zp2v" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.700321 4833 generic.go:334] "Generic (PLEG): container finished" podID="c03db41f-e7fb-4188-bd67-13f35c231490" containerID="9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e" exitCode=0 Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.700484 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-558c47b6d4-9zp2v" event={"ID":"c03db41f-e7fb-4188-bd67-13f35c231490","Type":"ContainerDied","Data":"9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e"} Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.700532 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-558c47b6d4-9zp2v" event={"ID":"c03db41f-e7fb-4188-bd67-13f35c231490","Type":"ContainerDied","Data":"2680ccaa3743c79ad8df0ae3fe46dc4d517812491a0140cb5e4ac3352d30fdf7"} Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.700588 4833 scope.go:117] "RemoveContainer" containerID="9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.700512 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-558c47b6d4-9zp2v" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.704797 4833 generic.go:334] "Generic (PLEG): container finished" podID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" containerID="24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27" exitCode=0 Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.704866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0a6ab499-ed60-45e7-b510-5a43422aa7f5","Type":"ContainerDied","Data":"24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27"} Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.704893 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0a6ab499-ed60-45e7-b510-5a43422aa7f5","Type":"ContainerDied","Data":"0b84c0d8e32dd5f7e418f450d21a0b0dbf45ffb952b07c5d18db22a16dac8081"} Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.706716 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.715500 4833 generic.go:334] "Generic (PLEG): container finished" podID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerID="0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b" exitCode=0 Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.715643 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"827f736f-2193-4ebd-ab7f-99fb22945d1e","Type":"ContainerDied","Data":"0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b"} Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.715677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"827f736f-2193-4ebd-ab7f-99fb22945d1e","Type":"ContainerDied","Data":"566396b1408580b2239a9e0ea20d35c824b1b882acf1d42473d5b4de5f5887be"} Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.715807 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.730502 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c1a7008b-3448-4108-81b0-4d16484a6f7b/ovn-northd/0.log" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.730630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1a7008b-3448-4108-81b0-4d16484a6f7b","Type":"ContainerDied","Data":"81f3e93d3087305b07519551ee186a721be27eea1cc8284e47a43644774e7604"} Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.730762 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 13 06:50:03 crc kubenswrapper[4833]: E1013 06:50:03.736518 4833 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Oct 13 06:50:03 crc kubenswrapper[4833]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-10-13T06:49:56Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Oct 13 06:50:03 crc kubenswrapper[4833]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Oct 13 06:50:03 crc kubenswrapper[4833]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-rtrth" message=< Oct 13 06:50:03 crc kubenswrapper[4833]: Exiting ovn-controller (1) [FAILED] Oct 13 06:50:03 crc kubenswrapper[4833]: Killing ovn-controller (1) [ OK ] Oct 13 06:50:03 crc kubenswrapper[4833]: Killing ovn-controller (1) with SIGKILL [ OK ] Oct 13 06:50:03 crc kubenswrapper[4833]: 2025-10-13T06:49:56Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Oct 13 06:50:03 crc kubenswrapper[4833]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Oct 13 06:50:03 crc kubenswrapper[4833]: > Oct 13 06:50:03 crc kubenswrapper[4833]: E1013 06:50:03.736573 4833 kuberuntime_container.go:691] "PreStop hook failed" err=< Oct 13 06:50:03 crc kubenswrapper[4833]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-10-13T06:49:56Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Oct 13 06:50:03 crc kubenswrapper[4833]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Oct 13 06:50:03 crc kubenswrapper[4833]: > pod="openstack/ovn-controller-rtrth" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" 
containerID="cri-o://14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.736606 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-rtrth" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" containerID="cri-o://14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545" gracePeriod=22 Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.742402 4833 scope.go:117] "RemoveContainer" containerID="9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e" Oct 13 06:50:03 crc kubenswrapper[4833]: E1013 06:50:03.743774 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e\": container with ID starting with 9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e not found: ID does not exist" containerID="9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.743854 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e"} err="failed to get container status \"9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e\": rpc error: code = NotFound desc = could not find container \"9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e\": container with ID starting with 9e6742134cdf90f69643cd249cb0d8765be245ae58a1aadfbd64d0b1618d524e not found: ID does not exist" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.743909 4833 scope.go:117] "RemoveContainer" containerID="24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.828174 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-public-tls-certs\") pod \"c03db41f-e7fb-4188-bd67-13f35c231490\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.828212 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-credential-keys\") pod \"c03db41f-e7fb-4188-bd67-13f35c231490\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.828255 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-fernet-keys\") pod \"c03db41f-e7fb-4188-bd67-13f35c231490\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.828294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-config-data\") pod \"c03db41f-e7fb-4188-bd67-13f35c231490\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.828420 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-scripts\") pod \"c03db41f-e7fb-4188-bd67-13f35c231490\" (UID: 
\"c03db41f-e7fb-4188-bd67-13f35c231490\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.828486 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-internal-tls-certs\") pod \"c03db41f-e7fb-4188-bd67-13f35c231490\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.828576 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkhtt\" (UniqueName: \"kubernetes.io/projected/c03db41f-e7fb-4188-bd67-13f35c231490-kube-api-access-rkhtt\") pod \"c03db41f-e7fb-4188-bd67-13f35c231490\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.828644 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-combined-ca-bundle\") pod \"c03db41f-e7fb-4188-bd67-13f35c231490\" (UID: \"c03db41f-e7fb-4188-bd67-13f35c231490\") " Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.885275 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c03db41f-e7fb-4188-bd67-13f35c231490" (UID: "c03db41f-e7fb-4188-bd67-13f35c231490"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.885773 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-scripts" (OuterVolumeSpecName: "scripts") pod "c03db41f-e7fb-4188-bd67-13f35c231490" (UID: "c03db41f-e7fb-4188-bd67-13f35c231490"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.887312 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c03db41f-e7fb-4188-bd67-13f35c231490" (UID: "c03db41f-e7fb-4188-bd67-13f35c231490"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.891062 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c03db41f-e7fb-4188-bd67-13f35c231490" (UID: "c03db41f-e7fb-4188-bd67-13f35c231490"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.893235 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-config-data" (OuterVolumeSpecName: "config-data") pod "c03db41f-e7fb-4188-bd67-13f35c231490" (UID: "c03db41f-e7fb-4188-bd67-13f35c231490"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.894975 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03db41f-e7fb-4188-bd67-13f35c231490-kube-api-access-rkhtt" (OuterVolumeSpecName: "kube-api-access-rkhtt") pod "c03db41f-e7fb-4188-bd67-13f35c231490" (UID: "c03db41f-e7fb-4188-bd67-13f35c231490"). InnerVolumeSpecName "kube-api-access-rkhtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.905453 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c03db41f-e7fb-4188-bd67-13f35c231490" (UID: "c03db41f-e7fb-4188-bd67-13f35c231490"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.905631 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c03db41f-e7fb-4188-bd67-13f35c231490" (UID: "c03db41f-e7fb-4188-bd67-13f35c231490"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.929969 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.929998 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.930008 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.930017 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.930025 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.930033 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.930042 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03db41f-e7fb-4188-bd67-13f35c231490-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:03 crc kubenswrapper[4833]: I1013 06:50:03.930050 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkhtt\" (UniqueName: \"kubernetes.io/projected/c03db41f-e7fb-4188-bd67-13f35c231490-kube-api-access-rkhtt\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 
crc kubenswrapper[4833]: I1013 06:50:03.967396 4833 scope.go:117] "RemoveContainer" containerID="81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:03.979054 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:03.987397 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:03.990432 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:03.993860 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.015682 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.047270 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.060412 4833 scope.go:117] "RemoveContainer" containerID="24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.060858 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27\": container with ID starting with 24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27 not found: ID does not exist" containerID="24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.060902 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27"} err="failed to get container status \"24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27\": rpc error: code = NotFound desc = could not find container \"24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27\": container with ID starting with 24aad4a10d73945e5a0646981275abbd2aeda300a5f6a5262692650bb4e35a27 not found: ID does not exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.060929 4833 scope.go:117] "RemoveContainer" containerID="81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.061144 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7\": container with ID starting with 81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7 not found: ID does not exist" containerID="81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.061187 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7"} err="failed to get container status \"81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7\": rpc error: code = NotFound desc = could not find container \"81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7\": container with ID starting with 81cf39063cd0366ae3391f599d627326a725a85022b63926955f72073a4f5bd7 not found: ID does not 
exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.061200 4833 scope.go:117] "RemoveContainer" containerID="0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.099286 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-558c47b6d4-9zp2v"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.122720 4833 scope.go:117] "RemoveContainer" containerID="8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.135732 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-558c47b6d4-9zp2v"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.225078 4833 scope.go:117] "RemoveContainer" containerID="0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.235746 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b\": container with ID starting with 0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b not found: ID does not exist" containerID="0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.235794 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b"} err="failed to get container status \"0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b\": rpc error: code = NotFound desc = could not find container \"0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b\": container with ID starting with 0e7b21d947b33ba49437a8fc41d929e050f2e2654fda6595a5bdceb0af1cad5b not found: ID does not exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.235823 4833 scope.go:117] "RemoveContainer" containerID="8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.236269 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a\": container with ID starting with 8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a not found: ID does not exist" containerID="8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.236295 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a"} err="failed to get container status \"8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a\": rpc error: code = NotFound desc = could not find container \"8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a\": container with ID starting with 8e0a7d40f38e036ffe265726cc3871a21f2953637eec4dba0015a2fbeb48b65a not found: ID does not exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.236306 4833 scope.go:117] "RemoveContainer" containerID="54a34d37063fa7510c51a589e85db2af1e8eef4bc3dcb4482d914746021edcd6" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.241667 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-rtrth_5fb7c39d-6b28-4530-b9b1-87c2af591f61/ovn-controller/0.log" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.241735 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtrth" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.258367 4833 scope.go:117] "RemoveContainer" containerID="e0d2353375289df900cadbe52a7dfd8067c5455ffa6c327d4b7380ccf466e04d" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.283008 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334273 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-combined-ca-bundle\") pod \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334330 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-ovn-controller-tls-certs\") pod \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334350 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run-ovn\") pod \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334456 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fb7c39d-6b28-4530-b9b1-87c2af591f61-scripts\") pod \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334480 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run\") pod \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334502 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq5j7\" (UniqueName: \"kubernetes.io/projected/5fb7c39d-6b28-4530-b9b1-87c2af591f61-kube-api-access-kq5j7\") pod \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334515 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-log-ovn\") pod \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\" (UID: \"5fb7c39d-6b28-4530-b9b1-87c2af591f61\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334805 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5fb7c39d-6b28-4530-b9b1-87c2af591f61" (UID: "5fb7c39d-6b28-4530-b9b1-87c2af591f61"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.334844 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run" (OuterVolumeSpecName: "var-run") pod "5fb7c39d-6b28-4530-b9b1-87c2af591f61" (UID: "5fb7c39d-6b28-4530-b9b1-87c2af591f61"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.335948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb7c39d-6b28-4530-b9b1-87c2af591f61-scripts" (OuterVolumeSpecName: "scripts") pod "5fb7c39d-6b28-4530-b9b1-87c2af591f61" (UID: "5fb7c39d-6b28-4530-b9b1-87c2af591f61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.336585 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5fb7c39d-6b28-4530-b9b1-87c2af591f61" (UID: "5fb7c39d-6b28-4530-b9b1-87c2af591f61"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.340345 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb7c39d-6b28-4530-b9b1-87c2af591f61-kube-api-access-kq5j7" (OuterVolumeSpecName: "kube-api-access-kq5j7") pod "5fb7c39d-6b28-4530-b9b1-87c2af591f61" (UID: "5fb7c39d-6b28-4530-b9b1-87c2af591f61"). InnerVolumeSpecName "kube-api-access-kq5j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.366337 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fb7c39d-6b28-4530-b9b1-87c2af591f61" (UID: "5fb7c39d-6b28-4530-b9b1-87c2af591f61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.402068 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "5fb7c39d-6b28-4530-b9b1-87c2af591f61" (UID: "5fb7c39d-6b28-4530-b9b1-87c2af591f61"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435448 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-scripts\") pod \"4418034e-f484-4638-94bd-5b086af9e8f3\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435493 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-config-data\") pod \"4418034e-f484-4638-94bd-5b086af9e8f3\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-sg-core-conf-yaml\") pod \"4418034e-f484-4638-94bd-5b086af9e8f3\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435529 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-run-httpd\") pod \"4418034e-f484-4638-94bd-5b086af9e8f3\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435580 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-combined-ca-bundle\") pod \"4418034e-f484-4638-94bd-5b086af9e8f3\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435615 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p92c\" (UniqueName: \"kubernetes.io/projected/4418034e-f484-4638-94bd-5b086af9e8f3-kube-api-access-8p92c\") pod \"4418034e-f484-4638-94bd-5b086af9e8f3\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435642 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-log-httpd\") pod \"4418034e-f484-4638-94bd-5b086af9e8f3\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435689 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-ceilometer-tls-certs\") pod \"4418034e-f484-4638-94bd-5b086af9e8f3\" (UID: \"4418034e-f484-4638-94bd-5b086af9e8f3\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435961 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435974 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fb7c39d-6b28-4530-b9b1-87c2af591f61-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435986 4833 reconciler_common.go:293] "Volume detached for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.435996 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fb7c39d-6b28-4530-b9b1-87c2af591f61-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.436006 4833 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.436015 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq5j7\" (UniqueName: \"kubernetes.io/projected/5fb7c39d-6b28-4530-b9b1-87c2af591f61-kube-api-access-kq5j7\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.436023 4833 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5fb7c39d-6b28-4530-b9b1-87c2af591f61-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.436760 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4418034e-f484-4638-94bd-5b086af9e8f3" (UID: "4418034e-f484-4638-94bd-5b086af9e8f3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.438139 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4418034e-f484-4638-94bd-5b086af9e8f3" (UID: "4418034e-f484-4638-94bd-5b086af9e8f3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.440240 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4418034e-f484-4638-94bd-5b086af9e8f3-kube-api-access-8p92c" (OuterVolumeSpecName: "kube-api-access-8p92c") pod "4418034e-f484-4638-94bd-5b086af9e8f3" (UID: "4418034e-f484-4638-94bd-5b086af9e8f3"). InnerVolumeSpecName "kube-api-access-8p92c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.441176 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-scripts" (OuterVolumeSpecName: "scripts") pod "4418034e-f484-4638-94bd-5b086af9e8f3" (UID: "4418034e-f484-4638-94bd-5b086af9e8f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.453807 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.458314 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4418034e-f484-4638-94bd-5b086af9e8f3" (UID: "4418034e-f484-4638-94bd-5b086af9e8f3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.493762 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4418034e-f484-4638-94bd-5b086af9e8f3" (UID: "4418034e-f484-4638-94bd-5b086af9e8f3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.503439 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4418034e-f484-4638-94bd-5b086af9e8f3" (UID: "4418034e-f484-4638-94bd-5b086af9e8f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.530723 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-config-data" (OuterVolumeSpecName: "config-data") pod "4418034e-f484-4638-94bd-5b086af9e8f3" (UID: "4418034e-f484-4638-94bd-5b086af9e8f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.537372 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrml6\" (UniqueName: \"kubernetes.io/projected/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-kube-api-access-mrml6\") pod \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.537621 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-combined-ca-bundle\") pod \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.537664 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-config-data\") pod \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\" (UID: \"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b\") " Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.537996 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.538023 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.538035 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.538048 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 
06:50:04.538059 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.538072 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p92c\" (UniqueName: \"kubernetes.io/projected/4418034e-f484-4638-94bd-5b086af9e8f3-kube-api-access-8p92c\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.538082 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4418034e-f484-4638-94bd-5b086af9e8f3-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.538094 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4418034e-f484-4638-94bd-5b086af9e8f3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.541700 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-kube-api-access-mrml6" (OuterVolumeSpecName: "kube-api-access-mrml6") pod "e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" (UID: "e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b"). InnerVolumeSpecName "kube-api-access-mrml6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.554827 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-config-data" (OuterVolumeSpecName: "config-data") pod "e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" (UID: "e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.555575 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" (UID: "e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.636473 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" path="/var/lib/kubelet/pods/0a6ab499-ed60-45e7-b510-5a43422aa7f5/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.637101 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" path="/var/lib/kubelet/pods/69c5134b-fc5b-453c-87ee-6a26e08796cf/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.638397 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" path="/var/lib/kubelet/pods/827f736f-2193-4ebd-ab7f-99fb22945d1e/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.639017 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.639051 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.639065 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrml6\" (UniqueName: \"kubernetes.io/projected/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b-kube-api-access-mrml6\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.639197 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" path="/var/lib/kubelet/pods/8a18c26d-a476-4e4b-9320-84369da38cf2/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.639906 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0ca608-57b5-4289-9271-fcc10a6c7422" path="/var/lib/kubelet/pods/aa0ca608-57b5-4289-9271-fcc10a6c7422/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.640930 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" path="/var/lib/kubelet/pods/aaeaef09-d532-4399-b9bb-c9e59fbf1a62/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.641645 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03db41f-e7fb-4188-bd67-13f35c231490" path="/var/lib/kubelet/pods/c03db41f-e7fb-4188-bd67-13f35c231490/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.642141 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" path="/var/lib/kubelet/pods/c1a7008b-3448-4108-81b0-4d16484a6f7b/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.643248 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" path="/var/lib/kubelet/pods/fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.643744 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd0bf370-6aac-4334-b612-db75770844df" path="/var/lib/kubelet/pods/fd0bf370-6aac-4334-b612-db75770844df/volumes" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.747428 4833 generic.go:334] "Generic (PLEG): container finished" podID="e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" 
containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" exitCode=0 Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.747491 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b","Type":"ContainerDied","Data":"0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc"} Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.747517 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b","Type":"ContainerDied","Data":"b5d84666afa529e541c19299ef85a3d905b02b6e03d65e000fa7b22c591ad4cb"} Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.747565 4833 scope.go:117] "RemoveContainer" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.747664 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.753931 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rtrth_5fb7c39d-6b28-4530-b9b1-87c2af591f61/ovn-controller/0.log" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.753975 4833 generic.go:334] "Generic (PLEG): container finished" podID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerID="14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545" exitCode=137 Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.754027 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth" event={"ID":"5fb7c39d-6b28-4530-b9b1-87c2af591f61","Type":"ContainerDied","Data":"14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545"} Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.754057 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtrth" event={"ID":"5fb7c39d-6b28-4530-b9b1-87c2af591f61","Type":"ContainerDied","Data":"d91f7b014bb590335e9ed56269ff92c8cca467ed322b755a0eb8b2d53f724508"} Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.754409 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtrth" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.776112 4833 generic.go:334] "Generic (PLEG): container finished" podID="4418034e-f484-4638-94bd-5b086af9e8f3" containerID="d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df" exitCode=0 Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.776167 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerDied","Data":"d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df"} Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.776202 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4418034e-f484-4638-94bd-5b086af9e8f3","Type":"ContainerDied","Data":"0ca875ce5679a12bb4cabcf0ebe06fabe16ded1d72a923c749d8a492f0f73dd1"} Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.776291 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.786714 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.792858 4833 scope.go:117] "RemoveContainer" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.795950 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc\": container with ID starting with 0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc not found: ID does not exist" containerID="0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.796055 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc"} err="failed to get container status \"0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc\": rpc error: code = NotFound desc = could not find container \"0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc\": container with ID starting with 0f094acdb89c411f919d5e575dcd1514370d320b5ec95bb3019deaf50dd6a0bc not found: ID does not exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.796132 4833 scope.go:117] "RemoveContainer" containerID="14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.798423 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.811703 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rtrth"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.817786 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rtrth"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.824848 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.830308 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.833698 4833 scope.go:117] "RemoveContainer" containerID="14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.834180 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545\": container with ID starting with 14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545 not found: ID does not exist" containerID="14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.834208 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545"} err="failed to get container status \"14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545\": rpc error: code = NotFound desc = could not find container \"14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545\": container with ID starting with 
14a8af544c70fb32901be0e6cb469a5beefe6d01d04d2a1c6569af305cff5545 not found: ID does not exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.834229 4833 scope.go:117] "RemoveContainer" containerID="2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.852716 4833 scope.go:117] "RemoveContainer" containerID="56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.878696 4833 scope.go:117] "RemoveContainer" containerID="d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.902789 4833 scope.go:117] "RemoveContainer" containerID="b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.924192 4833 scope.go:117] "RemoveContainer" containerID="2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.924746 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461\": container with ID starting with 2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461 not found: ID does not exist" containerID="2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.924779 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461"} err="failed to get container status \"2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461\": rpc error: code = NotFound desc = could not find container \"2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461\": container with ID starting with 2cabbee089607667537683595e667e4aa78e6c197269f4dd6ca05d0b1ab6e461 not found: ID does not exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.924801 4833 scope.go:117] "RemoveContainer" containerID="56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.925054 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5\": container with ID starting with 56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5 not found: ID does not exist" containerID="56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.925075 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5"} err="failed to get container status \"56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5\": rpc error: code = NotFound desc = could not find container \"56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5\": container with ID starting with 56972d15184e848e3cb04578a7051cd018536014dd786389c2513e45b4aaedf5 not found: ID does not exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.925089 4833 scope.go:117] "RemoveContainer" containerID="d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.925438 4833 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df\": container with ID starting with d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df not found: ID does not exist" containerID="d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.925480 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df"} err="failed to get container status \"d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df\": rpc error: code = NotFound desc = could not find container \"d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df\": container with ID starting with d1b5ef65dd6c04a469b4e26abcf945980198eb1805346763f419d84fce76a1df not found: ID does not exist" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.925521 4833 scope.go:117] "RemoveContainer" containerID="b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd" Oct 13 06:50:04 crc kubenswrapper[4833]: E1013 06:50:04.925883 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd\": container with ID starting with b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd not found: ID does not exist" containerID="b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd" Oct 13 06:50:04 crc kubenswrapper[4833]: I1013 06:50:04.925914 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd"} err="failed to get container status \"b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd\": rpc error: code = NotFound desc = could not find container \"b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd\": container with ID starting with b1d8fdb61d049ee070745d6bb37299a7e3a2ea5b6a2822cfd684cbef80477fcd not found: ID does not exist" Oct 13 06:50:06 crc kubenswrapper[4833]: E1013 06:50:06.223986 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:06 crc kubenswrapper[4833]: E1013 06:50:06.224416 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:06 crc kubenswrapper[4833]: E1013 06:50:06.224738 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" 
containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:06 crc kubenswrapper[4833]: E1013 06:50:06.224775 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" Oct 13 06:50:06 crc kubenswrapper[4833]: E1013 06:50:06.225194 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:06 crc kubenswrapper[4833]: E1013 06:50:06.226212 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:06 crc kubenswrapper[4833]: E1013 06:50:06.227795 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:06 crc kubenswrapper[4833]: E1013 06:50:06.227823 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd" Oct 13 06:50:06 crc kubenswrapper[4833]: I1013 06:50:06.638764 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" path="/var/lib/kubelet/pods/4418034e-f484-4638-94bd-5b086af9e8f3/volumes" Oct 13 06:50:06 crc kubenswrapper[4833]: I1013 06:50:06.639869 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" path="/var/lib/kubelet/pods/5fb7c39d-6b28-4530-b9b1-87c2af591f61/volumes" Oct 13 06:50:06 crc kubenswrapper[4833]: I1013 06:50:06.640517 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" path="/var/lib/kubelet/pods/e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b/volumes" Oct 13 06:50:11 crc kubenswrapper[4833]: E1013 06:50:11.221312 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:11 crc kubenswrapper[4833]: E1013 06:50:11.222291 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:11 crc kubenswrapper[4833]: E1013 06:50:11.226156 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:11 crc kubenswrapper[4833]: E1013 06:50:11.226266 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:11 crc kubenswrapper[4833]: E1013 06:50:11.226322 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" Oct 13 06:50:11 crc kubenswrapper[4833]: E1013 06:50:11.231149 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:11 crc kubenswrapper[4833]: E1013 06:50:11.235184 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:11 crc kubenswrapper[4833]: E1013 06:50:11.235254 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd" Oct 13 06:50:12 crc kubenswrapper[4833]: I1013 06:50:12.416400 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b78565d7c-d78jk" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9696/\": dial tcp 10.217.0.167:9696: connect: connection refused" Oct 13 06:50:16 crc kubenswrapper[4833]: E1013 06:50:16.224066 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" 
containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:16 crc kubenswrapper[4833]: E1013 06:50:16.225254 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:16 crc kubenswrapper[4833]: E1013 06:50:16.226327 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:16 crc kubenswrapper[4833]: E1013 06:50:16.226592 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:16 crc kubenswrapper[4833]: E1013 06:50:16.226696 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" Oct 13 06:50:16 crc kubenswrapper[4833]: E1013 06:50:16.229221 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:16 crc kubenswrapper[4833]: E1013 06:50:16.231122 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:16 crc kubenswrapper[4833]: E1013 06:50:16.231196 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd" Oct 13 06:50:17 crc kubenswrapper[4833]: I1013 06:50:17.907049 4833 generic.go:334] "Generic (PLEG): container finished" podID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerID="693f1a344ba18ce292d370fac9613ada4bf6424ec01d376fafe1fa5f5d79c8b2" exitCode=0 Oct 13 06:50:17 crc kubenswrapper[4833]: I1013 06:50:17.907146 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b78565d7c-d78jk" 
event={"ID":"65e5cee6-ee1c-4612-89b8-c2cfe968438b","Type":"ContainerDied","Data":"693f1a344ba18ce292d370fac9613ada4bf6424ec01d376fafe1fa5f5d79c8b2"} Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.113706 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.259352 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-ovndb-tls-certs\") pod \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.259453 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-internal-tls-certs\") pod \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.259515 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g6bm\" (UniqueName: \"kubernetes.io/projected/65e5cee6-ee1c-4612-89b8-c2cfe968438b-kube-api-access-2g6bm\") pod \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.259610 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-combined-ca-bundle\") pod \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.259730 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-public-tls-certs\") pod \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.259769 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-config\") pod \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.259806 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-httpd-config\") pod \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\" (UID: \"65e5cee6-ee1c-4612-89b8-c2cfe968438b\") " Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.264810 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "65e5cee6-ee1c-4612-89b8-c2cfe968438b" (UID: "65e5cee6-ee1c-4612-89b8-c2cfe968438b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.268780 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e5cee6-ee1c-4612-89b8-c2cfe968438b-kube-api-access-2g6bm" (OuterVolumeSpecName: "kube-api-access-2g6bm") pod "65e5cee6-ee1c-4612-89b8-c2cfe968438b" (UID: "65e5cee6-ee1c-4612-89b8-c2cfe968438b"). InnerVolumeSpecName "kube-api-access-2g6bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.313738 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "65e5cee6-ee1c-4612-89b8-c2cfe968438b" (UID: "65e5cee6-ee1c-4612-89b8-c2cfe968438b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.316991 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65e5cee6-ee1c-4612-89b8-c2cfe968438b" (UID: "65e5cee6-ee1c-4612-89b8-c2cfe968438b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.317671 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-config" (OuterVolumeSpecName: "config") pod "65e5cee6-ee1c-4612-89b8-c2cfe968438b" (UID: "65e5cee6-ee1c-4612-89b8-c2cfe968438b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.322092 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "65e5cee6-ee1c-4612-89b8-c2cfe968438b" (UID: "65e5cee6-ee1c-4612-89b8-c2cfe968438b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.334070 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "65e5cee6-ee1c-4612-89b8-c2cfe968438b" (UID: "65e5cee6-ee1c-4612-89b8-c2cfe968438b"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.361640 4833 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.361689 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.361702 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g6bm\" (UniqueName: \"kubernetes.io/projected/65e5cee6-ee1c-4612-89b8-c2cfe968438b-kube-api-access-2g6bm\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.361713 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.361721 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.361730 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.361740 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65e5cee6-ee1c-4612-89b8-c2cfe968438b-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.920962 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b78565d7c-d78jk" event={"ID":"65e5cee6-ee1c-4612-89b8-c2cfe968438b","Type":"ContainerDied","Data":"adef8b0bdfe5ad5877a14b9da9c7bf657b0674168363b76ea94b930673147f08"} Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.921126 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b78565d7c-d78jk" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.921354 4833 scope.go:117] "RemoveContainer" containerID="d43c06194342280710813b12ad00477467b337fe1567ed350bad5cbf383d8289" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.959165 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b78565d7c-d78jk"] Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.966393 4833 scope.go:117] "RemoveContainer" containerID="693f1a344ba18ce292d370fac9613ada4bf6424ec01d376fafe1fa5f5d79c8b2" Oct 13 06:50:18 crc kubenswrapper[4833]: I1013 06:50:18.971784 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b78565d7c-d78jk"] Oct 13 06:50:20 crc kubenswrapper[4833]: I1013 06:50:20.638135 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" path="/var/lib/kubelet/pods/65e5cee6-ee1c-4612-89b8-c2cfe968438b/volumes" Oct 13 06:50:21 crc kubenswrapper[4833]: E1013 06:50:21.222699 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:21 crc kubenswrapper[4833]: E1013 06:50:21.222720 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:21 crc kubenswrapper[4833]: E1013 06:50:21.223498 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:21 crc kubenswrapper[4833]: E1013 06:50:21.223835 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:21 crc kubenswrapper[4833]: E1013 06:50:21.223898 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" Oct 13 06:50:21 crc kubenswrapper[4833]: E1013 06:50:21.225003 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:21 crc kubenswrapper[4833]: E1013 06:50:21.227144 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:21 crc kubenswrapper[4833]: E1013 06:50:21.227187 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.003565 4833 generic.go:334] "Generic (PLEG): container finished" podID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerID="b8c0fd99cc7bf147089ee3034a7d63738ca80123381a9e4fcfb1fb0f59148960" exitCode=137 Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.004111 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"b8c0fd99cc7bf147089ee3034a7d63738ca80123381a9e4fcfb1fb0f59148960"} Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.006025 4833 generic.go:334] "Generic (PLEG): container finished" podID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerID="a3e737b2f25b20ffb3b6db74d1d62d4e6066ed41e5b09d860374f17370033973" exitCode=137 Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.006075 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b3604db-dabe-4d61-918d-b41a85fbcbf5","Type":"ContainerDied","Data":"a3e737b2f25b20ffb3b6db74d1d62d4e6066ed41e5b09d860374f17370033973"} Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.024611 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7j8gx_6aef55de-c4dd-409e-b9f1-b79adc99ea8d/ovs-vswitchd/0.log" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.025268 4833 generic.go:334] "Generic (PLEG): container finished" podID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" exitCode=137 Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.025318 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7j8gx" event={"ID":"6aef55de-c4dd-409e-b9f1-b79adc99ea8d","Type":"ContainerDied","Data":"8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0"} Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.183220 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.194329 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 13 06:50:26 crc kubenswrapper[4833]: E1013 06:50:26.221990 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0 is running failed: container process not found" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:26 crc kubenswrapper[4833]: E1013 06:50:26.222076 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:26 crc kubenswrapper[4833]: E1013 06:50:26.222291 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0 is running failed: container process not found" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:26 crc kubenswrapper[4833]: E1013 06:50:26.222345 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:26 crc kubenswrapper[4833]: E1013 06:50:26.222590 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0 is running failed: container process not found" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 13 06:50:26 crc kubenswrapper[4833]: E1013 06:50:26.222619 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd" Oct 13 06:50:26 crc kubenswrapper[4833]: E1013 06:50:26.222914 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 13 06:50:26 crc kubenswrapper[4833]: E1013 06:50:26.222941 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7j8gx" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.318078 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7j8gx_6aef55de-c4dd-409e-b9f1-b79adc99ea8d/ovs-vswitchd/0.log" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.319257 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.381198 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b3604db-dabe-4d61-918d-b41a85fbcbf5-etc-machine-id\") pod \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.381275 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") pod \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.381346 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-cache\") pod \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.381382 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data\") pod \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.381439 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.381591 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-scripts\") pod \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.381610 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b3604db-dabe-4d61-918d-b41a85fbcbf5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2b3604db-dabe-4d61-918d-b41a85fbcbf5" (UID: "2b3604db-dabe-4d61-918d-b41a85fbcbf5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.381957 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-cache" (OuterVolumeSpecName: "cache") pod "23940e94-2a8f-4e11-b8aa-31fbcd8d9076" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95477\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-kube-api-access-95477\") pod \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382230 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-log\") pod \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382292 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-run\") pod \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382330 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-log" (OuterVolumeSpecName: "var-log") pod "6aef55de-c4dd-409e-b9f1-b79adc99ea8d" (UID: "6aef55de-c4dd-409e-b9f1-b79adc99ea8d"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382338 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-854w8\" (UniqueName: \"kubernetes.io/projected/2b3604db-dabe-4d61-918d-b41a85fbcbf5-kube-api-access-854w8\") pod \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382370 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-run" (OuterVolumeSpecName: "var-run") pod "6aef55de-c4dd-409e-b9f1-b79adc99ea8d" (UID: "6aef55de-c4dd-409e-b9f1-b79adc99ea8d"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382438 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-combined-ca-bundle\") pod \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382476 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-lock\") pod \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\" (UID: \"23940e94-2a8f-4e11-b8aa-31fbcd8d9076\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382513 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data-custom\") pod \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\" (UID: \"2b3604db-dabe-4d61-918d-b41a85fbcbf5\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382861 4833 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-run\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382889 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b3604db-dabe-4d61-918d-b41a85fbcbf5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382902 4833 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-cache\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382914 4833 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-log\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.382970 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-lock" (OuterVolumeSpecName: "lock") pod "23940e94-2a8f-4e11-b8aa-31fbcd8d9076" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.387005 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-kube-api-access-95477" (OuterVolumeSpecName: "kube-api-access-95477") pod "23940e94-2a8f-4e11-b8aa-31fbcd8d9076" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076"). InnerVolumeSpecName "kube-api-access-95477". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.387352 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3604db-dabe-4d61-918d-b41a85fbcbf5-kube-api-access-854w8" (OuterVolumeSpecName: "kube-api-access-854w8") pod "2b3604db-dabe-4d61-918d-b41a85fbcbf5" (UID: "2b3604db-dabe-4d61-918d-b41a85fbcbf5"). InnerVolumeSpecName "kube-api-access-854w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.390796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2b3604db-dabe-4d61-918d-b41a85fbcbf5" (UID: "2b3604db-dabe-4d61-918d-b41a85fbcbf5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.394143 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-scripts" (OuterVolumeSpecName: "scripts") pod "2b3604db-dabe-4d61-918d-b41a85fbcbf5" (UID: "2b3604db-dabe-4d61-918d-b41a85fbcbf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.398681 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "swift") pod "23940e94-2a8f-4e11-b8aa-31fbcd8d9076" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.401454 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "23940e94-2a8f-4e11-b8aa-31fbcd8d9076" (UID: "23940e94-2a8f-4e11-b8aa-31fbcd8d9076"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.432281 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b3604db-dabe-4d61-918d-b41a85fbcbf5" (UID: "2b3604db-dabe-4d61-918d-b41a85fbcbf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.456992 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data" (OuterVolumeSpecName: "config-data") pod "2b3604db-dabe-4d61-918d-b41a85fbcbf5" (UID: "2b3604db-dabe-4d61-918d-b41a85fbcbf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483382 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-lib\") pod \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483460 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsjb5\" (UniqueName: \"kubernetes.io/projected/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-kube-api-access-zsjb5\") pod \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483464 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-lib" (OuterVolumeSpecName: "var-lib") pod "6aef55de-c4dd-409e-b9f1-b79adc99ea8d" (UID: "6aef55de-c4dd-409e-b9f1-b79adc99ea8d"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-etc-ovs\") pod \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483566 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-scripts\") pod \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\" (UID: \"6aef55de-c4dd-409e-b9f1-b79adc99ea8d\") " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483852 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-854w8\" (UniqueName: \"kubernetes.io/projected/2b3604db-dabe-4d61-918d-b41a85fbcbf5-kube-api-access-854w8\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483870 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483881 4833 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-lock\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483893 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483903 4833 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-var-lib\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483913 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483924 4833 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483945 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483955 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3604db-dabe-4d61-918d-b41a85fbcbf5-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.483965 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95477\" (UniqueName: \"kubernetes.io/projected/23940e94-2a8f-4e11-b8aa-31fbcd8d9076-kube-api-access-95477\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.484165 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "6aef55de-c4dd-409e-b9f1-b79adc99ea8d" (UID: "6aef55de-c4dd-409e-b9f1-b79adc99ea8d"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.485670 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-scripts" (OuterVolumeSpecName: "scripts") pod "6aef55de-c4dd-409e-b9f1-b79adc99ea8d" (UID: "6aef55de-c4dd-409e-b9f1-b79adc99ea8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.486699 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-kube-api-access-zsjb5" (OuterVolumeSpecName: "kube-api-access-zsjb5") pod "6aef55de-c4dd-409e-b9f1-b79adc99ea8d" (UID: "6aef55de-c4dd-409e-b9f1-b79adc99ea8d"). InnerVolumeSpecName "kube-api-access-zsjb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.523439 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.584844 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.584904 4833 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-etc-ovs\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.584924 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:26 crc kubenswrapper[4833]: I1013 06:50:26.584978 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsjb5\" (UniqueName: \"kubernetes.io/projected/6aef55de-c4dd-409e-b9f1-b79adc99ea8d-kube-api-access-zsjb5\") on node \"crc\" DevicePath \"\"" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.042954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23940e94-2a8f-4e11-b8aa-31fbcd8d9076","Type":"ContainerDied","Data":"d5b48aa7b32081d8b026e457fe94db17fe434a5b2762aebe8799ee491a7df1c2"} Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.043403 4833 scope.go:117] "RemoveContainer" containerID="b8c0fd99cc7bf147089ee3034a7d63738ca80123381a9e4fcfb1fb0f59148960" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.043059 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.048377 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b3604db-dabe-4d61-918d-b41a85fbcbf5","Type":"ContainerDied","Data":"f7a757e7430121896966c8b5353c760bb5793b9db0630c5d1962ce266bdfe25f"} Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.048732 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.051447 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7j8gx_6aef55de-c4dd-409e-b9f1-b79adc99ea8d/ovs-vswitchd/0.log" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.052402 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7j8gx" event={"ID":"6aef55de-c4dd-409e-b9f1-b79adc99ea8d","Type":"ContainerDied","Data":"c45ec8164269a0ab296754a0a07dd0c8f338f37fc34ab2c06eab9cb5210b1958"} Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.052522 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7j8gx" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.082973 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-7j8gx"] Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.094507 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-7j8gx"] Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.099900 4833 scope.go:117] "RemoveContainer" containerID="9b5d782d1b0574c39149c8bb487ccb192e4ad78574ba00d0053886812eecf629" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.099938 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.104977 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.127622 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.132741 4833 scope.go:117] "RemoveContainer" containerID="b4b5158af1d09b9e60b53b67061ee2a7c79d89b8a882cf00a94e754f31eeb82c" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.134454 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.151718 4833 scope.go:117] "RemoveContainer" containerID="9dad12e9c90578194f390432ae46d99079a4a5d4c95d825ba6dcc15e26e20fb2" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.167144 4833 scope.go:117] "RemoveContainer" containerID="02e170a5ebde87992af1b9ec82acf052249debf50eb102dbdc067004eac83dd6" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.187763 4833 scope.go:117] "RemoveContainer" containerID="ef4bcd2d312a9e41b4e42cf22758d715ea58715ab0b3bcd2ec00f09ab616489b" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.209238 4833 scope.go:117] "RemoveContainer" containerID="5847c7fbaaa19a0f3623af3ea4be590fad1d82ea8d09cd6086994de5af8c21c0" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.224333 4833 scope.go:117] "RemoveContainer" containerID="7fbc873a90a0e18d29a4c28fb0bffb723bba4761bbd24dad68303e83c89729b5" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.241761 4833 scope.go:117] "RemoveContainer" containerID="dda623bd500bc7d4d2d7d9bda0087208d82cc295d3ca8170fefd53b38c5cb99b" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.261205 4833 scope.go:117] "RemoveContainer" containerID="b4fe6dd76ddecca8a3c9f5a3f305a67a70a4c5075c8827646cbfd73ae58679f8" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.277054 4833 scope.go:117] "RemoveContainer" containerID="a338bdcb17781b39a4745895b5274ba984f3740577bcb756eb359867e4c8349d" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.296517 4833 scope.go:117] "RemoveContainer" containerID="39a80ccb5dcfc3109b31f5ea15bdac0c69f4fb148fff6b2e14183efb30f32315" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.317846 4833 scope.go:117] "RemoveContainer" containerID="b40d94a3b28168dc3adfbd67bb111dd625c1b3a8e28dfcf65f21de1d71ac05ef" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.342176 4833 scope.go:117] "RemoveContainer" containerID="ddc798bf52735ed655b9f2029dcd6fac626a69a57beb0d6ecfacaf0af9255c10" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.362669 4833 scope.go:117] "RemoveContainer" containerID="7126480ee2e234f256253f3be3f11958f282b8685399c352e9fe1fed288e1a27" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 
06:50:27.390203 4833 scope.go:117] "RemoveContainer" containerID="2c269f1c0068b7093464c1d749f2f94c414ec34d98624840bb84d4f79d7523e2" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.410986 4833 scope.go:117] "RemoveContainer" containerID="a3e737b2f25b20ffb3b6db74d1d62d4e6066ed41e5b09d860374f17370033973" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.432907 4833 scope.go:117] "RemoveContainer" containerID="8e35e2a8250ae17cf941b37d8802e61ec860dff89840f9a3f6729e8f6064f8e0" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.451494 4833 scope.go:117] "RemoveContainer" containerID="ccb0cb8e20e540b2d0bc99f261e88b255b41fadccc2553732c4d1ef5a10ea574" Oct 13 06:50:27 crc kubenswrapper[4833]: I1013 06:50:27.476282 4833 scope.go:117] "RemoveContainer" containerID="e01d929298cb195a544729b45935babcfa665b42e6e77270e6adc29a83a7fd2f" Oct 13 06:50:28 crc kubenswrapper[4833]: I1013 06:50:28.638865 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" path="/var/lib/kubelet/pods/23940e94-2a8f-4e11-b8aa-31fbcd8d9076/volumes" Oct 13 06:50:28 crc kubenswrapper[4833]: I1013 06:50:28.643027 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" path="/var/lib/kubelet/pods/2b3604db-dabe-4d61-918d-b41a85fbcbf5/volumes" Oct 13 06:50:28 crc kubenswrapper[4833]: I1013 06:50:28.644448 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" path="/var/lib/kubelet/pods/6aef55de-c4dd-409e-b9f1-b79adc99ea8d/volumes" Oct 13 06:50:30 crc kubenswrapper[4833]: I1013 06:50:30.549802 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:50:30 crc kubenswrapper[4833]: I1013 06:50:30.550119 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:51:00 crc kubenswrapper[4833]: I1013 06:51:00.543433 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:51:00 crc kubenswrapper[4833]: I1013 06:51:00.544265 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:51:00 crc kubenswrapper[4833]: I1013 06:51:00.544331 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:51:00 crc kubenswrapper[4833]: I1013 06:51:00.545307 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1e0b8be85d4de611f9b44e391e311db1ba1fbe1a8afc86f11d35d9a6be4b16ce"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 06:51:00 crc kubenswrapper[4833]: I1013 06:51:00.545441 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://1e0b8be85d4de611f9b44e391e311db1ba1fbe1a8afc86f11d35d9a6be4b16ce" gracePeriod=600 Oct 13 06:51:01 crc kubenswrapper[4833]: I1013 06:51:01.382848 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="1e0b8be85d4de611f9b44e391e311db1ba1fbe1a8afc86f11d35d9a6be4b16ce" exitCode=0 Oct 13 06:51:01 crc kubenswrapper[4833]: I1013 06:51:01.383271 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"1e0b8be85d4de611f9b44e391e311db1ba1fbe1a8afc86f11d35d9a6be4b16ce"} Oct 13 06:51:01 crc kubenswrapper[4833]: I1013 06:51:01.383399 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"} Oct 13 06:51:01 crc kubenswrapper[4833]: I1013 06:51:01.383424 4833 scope.go:117] "RemoveContainer" containerID="e40769d80ac05fe37627a30679bf55af458c5472940a5c49dc7bce9376576247" Oct 13 06:51:46 crc kubenswrapper[4833]: I1013 06:51:46.872123 4833 scope.go:117] "RemoveContainer" containerID="7d5393630528a3731b07a1e3a9290637e0b186b2594cc6d44c147e3d89bf11bd" Oct 13 06:51:46 crc kubenswrapper[4833]: I1013 06:51:46.907058 4833 scope.go:117] "RemoveContainer" containerID="a71d16b69b5e8b1db89864394761fa02481b8e3f821d42d70b8b6ffb4df0bc2c" Oct 13 06:51:46 crc kubenswrapper[4833]: I1013 06:51:46.947854 4833 scope.go:117] "RemoveContainer" containerID="5695eb3a82d3a3e492348a23917f842cdbe3066717947a49e3101bca340c7b89" Oct 13 06:51:46 crc kubenswrapper[4833]: I1013 06:51:46.982109 4833 scope.go:117] "RemoveContainer" containerID="dfb5554cc1e88bb53880bea3c9c794e8bd7e2ca9872d2a7c6cd5b75d875acd2b" Oct 13 06:51:47 crc kubenswrapper[4833]: I1013 06:51:47.036777 4833 scope.go:117] "RemoveContainer" containerID="40d67ace0956fe6006bf47bbf62dbdb5ebc2c7ef856bdb62b1c074f6c99b2748" Oct 13 06:51:47 crc kubenswrapper[4833]: I1013 06:51:47.075550 4833 scope.go:117] "RemoveContainer" containerID="6ab1238369a6f5949fd2069743bf6e4ae555e077f1842bd96f9fae2a7a21713c" Oct 13 06:51:47 crc kubenswrapper[4833]: I1013 06:51:47.122354 4833 scope.go:117] "RemoveContainer" containerID="afce58125c1568e6bf304d5afa6b0e91c5a9d7ebe531902badfc18775732770c" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.173155 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mqp7l"] Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.183355 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7113b07b-875e-4a09-a221-be312e4d0dce" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.183594 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7113b07b-875e-4a09-a221-be312e4d0dce" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.183940 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="ovsdbserver-sb" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.184046 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="ovsdbserver-sb" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.186609 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerName="rabbitmq" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.186718 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerName="rabbitmq" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.186799 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0bf370-6aac-4334-b612-db75770844df" containerName="nova-cell0-conductor-conductor" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.186876 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0bf370-6aac-4334-b612-db75770844df" containerName="nova-cell0-conductor-conductor" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.186956 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="rsync" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.187031 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="rsync" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.187119 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerName="neutron-httpd" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.187203 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerName="neutron-httpd" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.187291 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.187369 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.187454 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.187581 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.187710 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.187846 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.187963 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-server" Oct 13 06:52:31 crc kubenswrapper[4833]: 
I1013 06:52:31.188077 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-server" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189716 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb825980-5dc2-420a-8638-9607a9f1eb1f" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.189744 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb825980-5dc2-420a-8638-9607a9f1eb1f" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189781 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerName="cinder-api-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.189796 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerName="cinder-api-log" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189807 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-updater" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.189819 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-updater" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189845 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.189856 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-log" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189873 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-server" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.189886 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-server" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189900 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-reaper" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.189911 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-reaper" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189931 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-auditor" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.189942 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-auditor" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189956 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f688b0-b3aa-46f7-a700-c6619e3a3951" containerName="kube-state-metrics" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.189967 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f688b0-b3aa-46f7-a700-c6619e3a3951" containerName="kube-state-metrics" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.189985 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626d71e0-e957-4a46-9565-d19058a575c9" containerName="placement-api" Oct 13 06:52:31 
crc kubenswrapper[4833]: I1013 06:52:31.189995 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="626d71e0-e957-4a46-9565-d19058a575c9" containerName="placement-api" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190010 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerName="barbican-keystone-listener-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190020 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerName="barbican-keystone-listener-log" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190038 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" containerName="nova-cell1-conductor-conductor" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190049 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" containerName="nova-cell1-conductor-conductor" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190069 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190081 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190092 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190103 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api-log" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190123 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190135 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190152 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" containerName="memcached" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190163 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" containerName="memcached" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190175 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerName="barbican-worker" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190185 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerName="barbican-worker" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190207 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b98eb9-459c-4a87-88e3-63624b7969b9" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190218 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b98eb9-459c-4a87-88e3-63624b7969b9" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190234 4833 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="626d71e0-e957-4a46-9565-d19058a575c9" containerName="placement-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190245 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="626d71e0-e957-4a46-9565-d19058a575c9" containerName="placement-log" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190268 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0ca608-57b5-4289-9271-fcc10a6c7422" containerName="mysql-bootstrap" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190278 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0ca608-57b5-4289-9271-fcc10a6c7422" containerName="mysql-bootstrap" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190295 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerName="neutron-api" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190305 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerName="neutron-api" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190318 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="ovn-northd" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190360 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="ovn-northd" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190382 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerName="barbican-worker-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190393 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerName="barbican-worker-log" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190412 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" containerName="galera" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190422 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" containerName="galera" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190436 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="sg-core" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190446 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="sg-core" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190457 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-api" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190467 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-api" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190482 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f980bce-4b41-461d-9a1f-af4e6fb7455b" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190493 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f980bce-4b41-461d-9a1f-af4e6fb7455b" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190507 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerName="setup-container" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190518 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerName="setup-container" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190529 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" containerName="dnsmasq-dns" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190562 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" containerName="dnsmasq-dns" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190576 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="ceilometer-central-agent" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190585 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="ceilometer-central-agent" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190606 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c03db41f-e7fb-4188-bd67-13f35c231490" containerName="keystone-api" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190616 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c03db41f-e7fb-4188-bd67-13f35c231490" containerName="keystone-api" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190627 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190637 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-log" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190651 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190662 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190678 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-server" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190688 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-server" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190703 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190714 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190727 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190737 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190751 4833 
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190761 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" containerName="setup-container"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190777 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-server"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190787 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-server"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190800 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" containerName="rabbitmq"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190810 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" containerName="rabbitmq"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190826 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerName="cinder-scheduler"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190836 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerName="cinder-scheduler"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190853 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerName="ovsdbserver-nb"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190867 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerName="ovsdbserver-nb"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190886 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" containerName="mysql-bootstrap"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190896 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" containerName="mysql-bootstrap"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190916 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-auditor"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190927 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-auditor"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190938 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190948 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190967 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-log"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.190978 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-log"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.190989 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191000 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191018 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="swift-recon-cron"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191027 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="swift-recon-cron"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191042 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-log"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191052 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-log"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191063 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71af496-4851-4904-9003-0358adc97b94" containerName="mariadb-account-delete"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191073 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71af496-4851-4904-9003-0358adc97b94" containerName="mariadb-account-delete"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191092 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-updater"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191102 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-updater"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191116 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" containerName="init"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191127 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" containerName="init"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191142 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server-init"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191152 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server-init"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191162 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="proxy-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191172 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="proxy-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191191 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerName="probe"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191201 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerName="probe"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191215 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475289a4-cf33-4f56-93d9-73f7551026f8" containerName="nova-scheduler-scheduler"
"RemoveStaleState: removing container" podUID="475289a4-cf33-4f56-93d9-73f7551026f8" containerName="nova-scheduler-scheduler" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191225 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="475289a4-cf33-4f56-93d9-73f7551026f8" containerName="nova-scheduler-scheduler" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191243 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191252 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191267 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bc4033-85b9-4212-b1e2-3c5888ddcf0a" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191276 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bc4033-85b9-4212-b1e2-3c5888ddcf0a" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191293 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-auditor" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191303 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-auditor" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191317 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerName="barbican-keystone-listener" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191329 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerName="barbican-keystone-listener" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191345 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-expirer" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191355 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-expirer" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191367 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="ceilometer-notification-agent" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191378 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="ceilometer-notification-agent" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191396 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-httpd" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191406 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-httpd" Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191418 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0ca608-57b5-4289-9271-fcc10a6c7422" containerName="galera" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191427 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0ca608-57b5-4289-9271-fcc10a6c7422" containerName="galera" Oct 13 06:52:31 crc 
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191453 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerName="cinder-api"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191474 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191484 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191502 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191511 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: E1013 06:52:31.191566 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-metadata"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191579 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-metadata"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191934 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7113b07b-875e-4a09-a221-be312e4d0dce" containerName="nova-cell1-novncproxy-novncproxy"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191962 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerName="cinder-scheduler"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191977 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-auditor"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.191994 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="openstack-network-exporter"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192014 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="proxy-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192038 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fe1ee9-51ff-4f77-8dd7-4e29e3365556" containerName="dnsmasq-dns"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192050 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="827f736f-2193-4ebd-ab7f-99fb22945d1e" containerName="rabbitmq"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192069 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-api"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192086 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="626d71e0-e957-4a46-9565-d19058a575c9" containerName="placement-api"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192102 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-auditor"
podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-auditor" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192115 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-httpd" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192126 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerName="barbican-keystone-listener" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192137 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-server" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192152 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b98eb9-459c-4a87-88e3-63624b7969b9" containerName="openstack-network-exporter" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192163 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovs-vswitchd" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192174 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-metadata" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192188 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="sg-core" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192204 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192219 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-auditor" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192234 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb825980-5dc2-420a-8638-9607a9f1eb1f" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192251 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0bc4033-85b9-4212-b1e2-3c5888ddcf0a" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192268 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-server" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192282 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-expirer" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192299 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e5cee6-ee1c-4612-89b8-c2cfe968438b" containerName="neutron-api" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192312 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="475289a4-cf33-4f56-93d9-73f7551026f8" containerName="nova-scheduler-scheduler" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192330 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="swift-recon-cron" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192348 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" 
containerName="ceilometer-central-agent" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192362 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a7008b-3448-4108-81b0-4d16484a6f7b" containerName="ovn-northd" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192372 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aaf5d8e-00de-473b-91d2-1dd8a7354853" containerName="nova-metadata-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192386 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f980bce-4b41-461d-9a1f-af4e6fb7455b" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192403 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-updater" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192415 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2db326-7b3a-4cc8-acb4-9c680c8f4972" containerName="galera" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192431 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-server" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192447 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aef55de-c4dd-409e-b9f1-b79adc99ea8d" containerName="ovsdb-server" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192459 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="626d71e0-e957-4a46-9565-d19058a575c9" containerName="placement-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192473 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a6ab499-ed60-45e7-b510-5a43422aa7f5" containerName="rabbitmq" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192485 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80d0e3a-46f5-4e5e-b53f-3794f8b2b99b" containerName="nova-cell1-conductor-conductor" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192500 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71af496-4851-4904-9003-0358adc97b94" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192517 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192531 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerName="ovsdbserver-nb" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192567 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04cb142-7473-455b-8d5b-f79d879d8d58" containerName="glance-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192583 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c66f33-3fbd-4a35-8e0d-2b38c3cd513a" containerName="mariadb-account-delete" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192593 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-server" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192603 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f688b0-b3aa-46f7-a700-c6619e3a3951" containerName="kube-state-metrics" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192637 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3604db-dabe-4d61-918d-b41a85fbcbf5" containerName="probe"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192651 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="openstack-network-exporter"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192664 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerName="barbican-worker"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192716 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerName="cinder-api"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192729 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0ca608-57b5-4289-9271-fcc10a6c7422" containerName="galera"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192741 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0bf370-6aac-4334-b612-db75770844df" containerName="nova-cell0-conductor-conductor"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192754 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f85d40e-16b8-4ece-a268-8b4d227ac36c" containerName="barbican-worker-log"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192770 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4418034e-f484-4638-94bd-5b086af9e8f3" containerName="ceilometer-notification-agent"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192784 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ab7add-ea30-4610-a96a-2cad6ae8e40c" containerName="openstack-network-exporter"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192799 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="container-updater"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192812 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="rsync"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192828 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03db41f-e7fb-4188-bd67-13f35c231490" containerName="keystone-api"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192843 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4d68da-3ce0-4d91-a414-0f4fd4bd3dba" containerName="memcached"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192859 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c5134b-fc5b-453c-87ee-6a26e08796cf" containerName="nova-api-log"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192870 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192889 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="77004520-24e0-4076-8155-b4a8b6b3e1a2" containerName="proxy-httpd"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192900 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-log"
podUID="baa4dafc-7be7-4f97-ba72-359c27e3151c" containerName="glance-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192915 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="object-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192933 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1410c-e237-4abe-9a2d-c8e8b5242d93" containerName="barbican-api" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192948 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-reaper" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192959 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb7c39d-6b28-4530-b9b1-87c2af591f61" containerName="ovn-controller" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192972 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="336d549b-b94b-4966-af57-2289b1c8acc8" containerName="ovsdbserver-sb" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.192988 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaeaef09-d532-4399-b9bb-c9e59fbf1a62" containerName="cinder-api-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.193006 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a18c26d-a476-4e4b-9320-84369da38cf2" containerName="barbican-keystone-listener-log" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.193020 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="23940e94-2a8f-4e11-b8aa-31fbcd8d9076" containerName="account-replicator" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.194717 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqp7l"] Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.194852 4833 util.go:30] "No sandbox for pod can be found. 
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.273355 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fppw\" (UniqueName: \"kubernetes.io/projected/bca47b79-4b03-4940-b41e-d0df78918244-kube-api-access-6fppw\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.273438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-catalog-content\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.273645 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-utilities\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.375635 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-utilities\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.375696 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fppw\" (UniqueName: \"kubernetes.io/projected/bca47b79-4b03-4940-b41e-d0df78918244-kube-api-access-6fppw\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.375775 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-catalog-content\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.376134 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-utilities\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.376157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-catalog-content\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.399062 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fppw\" (UniqueName: \"kubernetes.io/projected/bca47b79-4b03-4940-b41e-d0df78918244-kube-api-access-6fppw\") pod \"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l"
\"redhat-marketplace-mqp7l\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") " pod="openshift-marketplace/redhat-marketplace-mqp7l" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.521727 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqp7l" Oct 13 06:52:31 crc kubenswrapper[4833]: I1013 06:52:31.770235 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqp7l"] Oct 13 06:52:32 crc kubenswrapper[4833]: I1013 06:52:32.349087 4833 generic.go:334] "Generic (PLEG): container finished" podID="bca47b79-4b03-4940-b41e-d0df78918244" containerID="98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca" exitCode=0 Oct 13 06:52:32 crc kubenswrapper[4833]: I1013 06:52:32.349161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqp7l" event={"ID":"bca47b79-4b03-4940-b41e-d0df78918244","Type":"ContainerDied","Data":"98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca"} Oct 13 06:52:32 crc kubenswrapper[4833]: I1013 06:52:32.349494 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqp7l" event={"ID":"bca47b79-4b03-4940-b41e-d0df78918244","Type":"ContainerStarted","Data":"91aa28d9349fe84aa3ac13385a48b6ef717869ae21cba60e6442665b70a1a030"} Oct 13 06:52:32 crc kubenswrapper[4833]: I1013 06:52:32.352108 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 06:52:33 crc kubenswrapper[4833]: I1013 06:52:33.360513 4833 generic.go:334] "Generic (PLEG): container finished" podID="bca47b79-4b03-4940-b41e-d0df78918244" containerID="5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3" exitCode=0 Oct 13 06:52:33 crc kubenswrapper[4833]: I1013 06:52:33.360619 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqp7l" event={"ID":"bca47b79-4b03-4940-b41e-d0df78918244","Type":"ContainerDied","Data":"5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3"} Oct 13 06:52:34 crc kubenswrapper[4833]: I1013 06:52:34.369186 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqp7l" event={"ID":"bca47b79-4b03-4940-b41e-d0df78918244","Type":"ContainerStarted","Data":"229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316"} Oct 13 06:52:34 crc kubenswrapper[4833]: I1013 06:52:34.387837 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mqp7l" podStartSLOduration=1.954499741 podStartE2EDuration="3.387819506s" podCreationTimestamp="2025-10-13 06:52:31 +0000 UTC" firstStartedPulling="2025-10-13 06:52:32.351826484 +0000 UTC m=+1442.452249400" lastFinishedPulling="2025-10-13 06:52:33.785146259 +0000 UTC m=+1443.885569165" observedRunningTime="2025-10-13 06:52:34.386122707 +0000 UTC m=+1444.486545633" watchObservedRunningTime="2025-10-13 06:52:34.387819506 +0000 UTC m=+1444.488242432" Oct 13 06:52:41 crc kubenswrapper[4833]: I1013 06:52:41.522029 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mqp7l" Oct 13 06:52:41 crc kubenswrapper[4833]: I1013 06:52:41.522689 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mqp7l" Oct 13 06:52:41 crc kubenswrapper[4833]: I1013 06:52:41.565615 4833 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mqp7l" Oct 13 06:52:41 crc kubenswrapper[4833]: I1013 06:52:41.916302 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b9dd2"] Oct 13 06:52:41 crc kubenswrapper[4833]: I1013 06:52:41.918137 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:41 crc kubenswrapper[4833]: I1013 06:52:41.924328 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9dd2"] Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.031223 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzjhv\" (UniqueName: \"kubernetes.io/projected/9a9bfeed-cc45-475f-888d-aa16cf0b4745-kube-api-access-nzjhv\") pod \"community-operators-b9dd2\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.032050 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-utilities\") pod \"community-operators-b9dd2\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.032228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-catalog-content\") pod \"community-operators-b9dd2\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.133460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzjhv\" (UniqueName: \"kubernetes.io/projected/9a9bfeed-cc45-475f-888d-aa16cf0b4745-kube-api-access-nzjhv\") pod \"community-operators-b9dd2\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.133551 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-utilities\") pod \"community-operators-b9dd2\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.133603 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-catalog-content\") pod \"community-operators-b9dd2\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.134217 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-catalog-content\") pod \"community-operators-b9dd2\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.134252 4833 
Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.165997 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzjhv\" (UniqueName: \"kubernetes.io/projected/9a9bfeed-cc45-475f-888d-aa16cf0b4745-kube-api-access-nzjhv\") pod \"community-operators-b9dd2\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " pod="openshift-marketplace/community-operators-b9dd2"
Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.247246 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9dd2"
Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.485444 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:42 crc kubenswrapper[4833]: I1013 06:52:42.716135 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9dd2"]
Oct 13 06:52:42 crc kubenswrapper[4833]: W1013 06:52:42.721053 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a9bfeed_cc45_475f_888d_aa16cf0b4745.slice/crio-c31c7d316832cf00b4b4bb12ad994b8d1e367ec41db78ab872c1b13265c19703 WatchSource:0}: Error finding container c31c7d316832cf00b4b4bb12ad994b8d1e367ec41db78ab872c1b13265c19703: Status 404 returned error can't find the container with id c31c7d316832cf00b4b4bb12ad994b8d1e367ec41db78ab872c1b13265c19703
Oct 13 06:52:43 crc kubenswrapper[4833]: I1013 06:52:43.456021 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerID="bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab" exitCode=0
Oct 13 06:52:43 crc kubenswrapper[4833]: I1013 06:52:43.456142 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9dd2" event={"ID":"9a9bfeed-cc45-475f-888d-aa16cf0b4745","Type":"ContainerDied","Data":"bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab"}
Oct 13 06:52:43 crc kubenswrapper[4833]: I1013 06:52:43.456468 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9dd2" event={"ID":"9a9bfeed-cc45-475f-888d-aa16cf0b4745","Type":"ContainerStarted","Data":"c31c7d316832cf00b4b4bb12ad994b8d1e367ec41db78ab872c1b13265c19703"}
Oct 13 06:52:44 crc kubenswrapper[4833]: I1013 06:52:44.465980 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9dd2" event={"ID":"9a9bfeed-cc45-475f-888d-aa16cf0b4745","Type":"ContainerStarted","Data":"5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45"}
Oct 13 06:52:44 crc kubenswrapper[4833]: I1013 06:52:44.801266 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqp7l"]
Oct 13 06:52:45 crc kubenswrapper[4833]: I1013 06:52:45.474854 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerID="5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45" exitCode=0
Oct 13 06:52:45 crc kubenswrapper[4833]: I1013 06:52:45.475033 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mqp7l" podUID="bca47b79-4b03-4940-b41e-d0df78918244" containerName="registry-server" containerID="cri-o://229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316" gracePeriod=2
Oct 13 06:52:45 crc kubenswrapper[4833]: I1013 06:52:45.475924 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9dd2" event={"ID":"9a9bfeed-cc45-475f-888d-aa16cf0b4745","Type":"ContainerDied","Data":"5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45"}
Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.000047 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqp7l"
Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.009978 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-catalog-content\") pod \"bca47b79-4b03-4940-b41e-d0df78918244\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") "
Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.010021 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fppw\" (UniqueName: \"kubernetes.io/projected/bca47b79-4b03-4940-b41e-d0df78918244-kube-api-access-6fppw\") pod \"bca47b79-4b03-4940-b41e-d0df78918244\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") "
Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.010078 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-utilities\") pod \"bca47b79-4b03-4940-b41e-d0df78918244\" (UID: \"bca47b79-4b03-4940-b41e-d0df78918244\") "
Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.010978 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-utilities" (OuterVolumeSpecName: "utilities") pod "bca47b79-4b03-4940-b41e-d0df78918244" (UID: "bca47b79-4b03-4940-b41e-d0df78918244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.017811 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca47b79-4b03-4940-b41e-d0df78918244-kube-api-access-6fppw" (OuterVolumeSpecName: "kube-api-access-6fppw") pod "bca47b79-4b03-4940-b41e-d0df78918244" (UID: "bca47b79-4b03-4940-b41e-d0df78918244"). InnerVolumeSpecName "kube-api-access-6fppw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.037710 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bca47b79-4b03-4940-b41e-d0df78918244" (UID: "bca47b79-4b03-4940-b41e-d0df78918244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.112273 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.112322 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fppw\" (UniqueName: \"kubernetes.io/projected/bca47b79-4b03-4940-b41e-d0df78918244-kube-api-access-6fppw\") on node \"crc\" DevicePath \"\"" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.112340 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca47b79-4b03-4940-b41e-d0df78918244-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.497043 4833 generic.go:334] "Generic (PLEG): container finished" podID="bca47b79-4b03-4940-b41e-d0df78918244" containerID="229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316" exitCode=0 Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.497112 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqp7l" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.497140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqp7l" event={"ID":"bca47b79-4b03-4940-b41e-d0df78918244","Type":"ContainerDied","Data":"229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316"} Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.497216 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqp7l" event={"ID":"bca47b79-4b03-4940-b41e-d0df78918244","Type":"ContainerDied","Data":"91aa28d9349fe84aa3ac13385a48b6ef717869ae21cba60e6442665b70a1a030"} Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.497241 4833 scope.go:117] "RemoveContainer" containerID="229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.501595 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9dd2" event={"ID":"9a9bfeed-cc45-475f-888d-aa16cf0b4745","Type":"ContainerStarted","Data":"361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05"} Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.521989 4833 scope.go:117] "RemoveContainer" containerID="5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.536287 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b9dd2" podStartSLOduration=3.087730018 podStartE2EDuration="5.536265624s" podCreationTimestamp="2025-10-13 06:52:41 +0000 UTC" firstStartedPulling="2025-10-13 06:52:43.4575586 +0000 UTC m=+1453.557981516" lastFinishedPulling="2025-10-13 06:52:45.906094206 +0000 UTC m=+1456.006517122" observedRunningTime="2025-10-13 06:52:46.531643221 +0000 UTC m=+1456.632066137" watchObservedRunningTime="2025-10-13 06:52:46.536265624 +0000 UTC m=+1456.636688540" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.554270 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqp7l"] Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.556965 4833 scope.go:117] "RemoveContainer" 
containerID="98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.564060 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqp7l"] Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.571484 4833 scope.go:117] "RemoveContainer" containerID="229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316" Oct 13 06:52:46 crc kubenswrapper[4833]: E1013 06:52:46.571900 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316\": container with ID starting with 229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316 not found: ID does not exist" containerID="229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.571941 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316"} err="failed to get container status \"229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316\": rpc error: code = NotFound desc = could not find container \"229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316\": container with ID starting with 229f3520ff4f0d6f55d66371a0d80b4df9a29556ace59a65b3747b44554d8316 not found: ID does not exist" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.571966 4833 scope.go:117] "RemoveContainer" containerID="5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3" Oct 13 06:52:46 crc kubenswrapper[4833]: E1013 06:52:46.572235 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3\": container with ID starting with 5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3 not found: ID does not exist" containerID="5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.572274 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3"} err="failed to get container status \"5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3\": rpc error: code = NotFound desc = could not find container \"5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3\": container with ID starting with 5e1d21b23c9a68250de8ab1ecdaa8b407b2843ef1cdf7a8618c988e7e7d2eef3 not found: ID does not exist" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.572302 4833 scope.go:117] "RemoveContainer" containerID="98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca" Oct 13 06:52:46 crc kubenswrapper[4833]: E1013 06:52:46.572599 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca\": container with ID starting with 98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca not found: ID does not exist" containerID="98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.572636 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca"} err="failed to get container status \"98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca\": rpc error: code = NotFound desc = could not find container \"98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca\": container with ID starting with 98b317b9f6ca5ce39dc6ff17820aa515917015d442094c742d6a179bfe1bebca not found: ID does not exist" Oct 13 06:52:46 crc kubenswrapper[4833]: I1013 06:52:46.639447 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca47b79-4b03-4940-b41e-d0df78918244" path="/var/lib/kubelet/pods/bca47b79-4b03-4940-b41e-d0df78918244/volumes" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.265235 4833 scope.go:117] "RemoveContainer" containerID="9424384411f7c288d34b772f293a05528e9898aec1796a567f88c98f287d2166" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.304324 4833 scope.go:117] "RemoveContainer" containerID="2158994d9cef2d1f01dc7ac527dc46f90fdc320c4f2710c425f7843cb592ba4e" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.328567 4833 scope.go:117] "RemoveContainer" containerID="af24bd2cda69df58cb2ef7d804c8efc24c48e8e22496a30e3272dfcd74a98be6" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.367942 4833 scope.go:117] "RemoveContainer" containerID="88dd9795a2f0d6b1192640550d57cb400e3c38ef205f9ffda1edfab60c02010b" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.383731 4833 scope.go:117] "RemoveContainer" containerID="305356998f5d389dcf783b5d0672aa457d5023b743e7a50dc3fee24c7afd5da4" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.420368 4833 scope.go:117] "RemoveContainer" containerID="e76cc24a2ac94de806ac4fc9d7b14ca2fbf2c4168bff8415d42bbe0da3fe7c8e" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.435234 4833 scope.go:117] "RemoveContainer" containerID="b3cd93d18ecb51926f4360f17dfaf0788c2428eba5b194104e0d9ba177142a5c" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.456821 4833 scope.go:117] "RemoveContainer" containerID="2741b44dc5fc1f2672f5979bc81587b08dc2a228b75a1ce72591ec32cbe809c9" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.479026 4833 scope.go:117] "RemoveContainer" containerID="ab4603996e10efb1837fd43c7ec7732d2eee4186c2e23d81c201b5e44bc74800" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.517654 4833 scope.go:117] "RemoveContainer" containerID="e6de5ecc3f8c57e4a901a25b2899bef4da8d07a78b9d5ea4e9c74c5629425736" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.545347 4833 scope.go:117] "RemoveContainer" containerID="deb8cba5ece9f8611d161426287e6b8c0c943a7706726b8b53ed0122357cee8a" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.565937 4833 scope.go:117] "RemoveContainer" containerID="718295aa7ee717c333352fdf443a7bba813f3a4c32d380f3c3fa23c934f30af0" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.592455 4833 scope.go:117] "RemoveContainer" containerID="7e9b9ddb1647f7c0accf076a96f7e4e26b43c8c7d1b79a8c05c59c3623bb0777" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.635401 4833 scope.go:117] "RemoveContainer" containerID="fe364208d348b90b70689ae14abb9a2d806a31109fc66fccd1952b74f9640686" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.667001 4833 scope.go:117] "RemoveContainer" containerID="569297508f63a6c413306971b9b99483a9541ed3d6f625259c21948210f4cc46" Oct 13 06:52:47 crc kubenswrapper[4833]: I1013 06:52:47.694151 4833 scope.go:117] "RemoveContainer" 
containerID="8d5de2fe55ed4c376d29367c220f041f0708e49fe8ccbde857152a78ed7c46e7" Oct 13 06:52:52 crc kubenswrapper[4833]: I1013 06:52:52.247354 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:52 crc kubenswrapper[4833]: I1013 06:52:52.247666 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:52 crc kubenswrapper[4833]: I1013 06:52:52.286919 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:52 crc kubenswrapper[4833]: I1013 06:52:52.643394 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:52 crc kubenswrapper[4833]: I1013 06:52:52.689205 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b9dd2"] Oct 13 06:52:54 crc kubenswrapper[4833]: I1013 06:52:54.592068 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b9dd2" podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerName="registry-server" containerID="cri-o://361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05" gracePeriod=2 Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.057314 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.234524 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzjhv\" (UniqueName: \"kubernetes.io/projected/9a9bfeed-cc45-475f-888d-aa16cf0b4745-kube-api-access-nzjhv\") pod \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.234657 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-catalog-content\") pod \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.234712 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-utilities\") pod \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\" (UID: \"9a9bfeed-cc45-475f-888d-aa16cf0b4745\") " Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.238639 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-utilities" (OuterVolumeSpecName: "utilities") pod "9a9bfeed-cc45-475f-888d-aa16cf0b4745" (UID: "9a9bfeed-cc45-475f-888d-aa16cf0b4745"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.258317 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9bfeed-cc45-475f-888d-aa16cf0b4745-kube-api-access-nzjhv" (OuterVolumeSpecName: "kube-api-access-nzjhv") pod "9a9bfeed-cc45-475f-888d-aa16cf0b4745" (UID: "9a9bfeed-cc45-475f-888d-aa16cf0b4745"). InnerVolumeSpecName "kube-api-access-nzjhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.318521 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a9bfeed-cc45-475f-888d-aa16cf0b4745" (UID: "9a9bfeed-cc45-475f-888d-aa16cf0b4745"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.337477 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.337921 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzjhv\" (UniqueName: \"kubernetes.io/projected/9a9bfeed-cc45-475f-888d-aa16cf0b4745-kube-api-access-nzjhv\") on node \"crc\" DevicePath \"\"" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.338020 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bfeed-cc45-475f-888d-aa16cf0b4745-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.603326 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerID="361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05" exitCode=0 Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.603394 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9dd2" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.603393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9dd2" event={"ID":"9a9bfeed-cc45-475f-888d-aa16cf0b4745","Type":"ContainerDied","Data":"361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05"} Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.603478 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9dd2" event={"ID":"9a9bfeed-cc45-475f-888d-aa16cf0b4745","Type":"ContainerDied","Data":"c31c7d316832cf00b4b4bb12ad994b8d1e367ec41db78ab872c1b13265c19703"} Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.603509 4833 scope.go:117] "RemoveContainer" containerID="361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.629193 4833 scope.go:117] "RemoveContainer" containerID="5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.641280 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b9dd2"] Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.647404 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b9dd2"] Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.662797 4833 scope.go:117] "RemoveContainer" containerID="bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.681087 4833 scope.go:117] "RemoveContainer" containerID="361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05" Oct 13 06:52:55 crc kubenswrapper[4833]: E1013 06:52:55.681671 4833 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05\": container with ID starting with 361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05 not found: ID does not exist" containerID="361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.681736 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05"} err="failed to get container status \"361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05\": rpc error: code = NotFound desc = could not find container \"361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05\": container with ID starting with 361a80cb8df3244d489aa12f4b933a724a688d885273732f35f5c1c927aaaa05 not found: ID does not exist" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.681775 4833 scope.go:117] "RemoveContainer" containerID="5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45" Oct 13 06:52:55 crc kubenswrapper[4833]: E1013 06:52:55.682185 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45\": container with ID starting with 5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45 not found: ID does not exist" containerID="5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.682248 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45"} err="failed to get container status \"5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45\": rpc error: code = NotFound desc = could not find container \"5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45\": container with ID starting with 5dab52dc19707ebb25030c960f5383c01acad809ed8ae17a09cf1a8c61a8ad45 not found: ID does not exist" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.682278 4833 scope.go:117] "RemoveContainer" containerID="bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab" Oct 13 06:52:55 crc kubenswrapper[4833]: E1013 06:52:55.682875 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab\": container with ID starting with bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab not found: ID does not exist" containerID="bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab" Oct 13 06:52:55 crc kubenswrapper[4833]: I1013 06:52:55.682909 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab"} err="failed to get container status \"bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab\": rpc error: code = NotFound desc = could not find container \"bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab\": container with ID starting with bd0ba59ec2b5124ea72fcc091929a76dfb223224eec05a68d66cbf6ff12f50ab not found: ID does not exist" Oct 13 06:52:56 crc kubenswrapper[4833]: I1013 06:52:56.636372 4833 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" path="/var/lib/kubelet/pods/9a9bfeed-cc45-475f-888d-aa16cf0b4745/volumes" Oct 13 06:53:00 crc kubenswrapper[4833]: I1013 06:53:00.543455 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:53:00 crc kubenswrapper[4833]: I1013 06:53:00.543833 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:53:30 crc kubenswrapper[4833]: I1013 06:53:30.542787 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:53:30 crc kubenswrapper[4833]: I1013 06:53:30.543272 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.263641 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tss2k"] Oct 13 06:53:38 crc kubenswrapper[4833]: E1013 06:53:38.264749 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerName="extract-utilities" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.264771 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerName="extract-utilities" Oct 13 06:53:38 crc kubenswrapper[4833]: E1013 06:53:38.264796 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca47b79-4b03-4940-b41e-d0df78918244" containerName="extract-content" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.264806 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca47b79-4b03-4940-b41e-d0df78918244" containerName="extract-content" Oct 13 06:53:38 crc kubenswrapper[4833]: E1013 06:53:38.264825 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca47b79-4b03-4940-b41e-d0df78918244" containerName="extract-utilities" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.264835 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca47b79-4b03-4940-b41e-d0df78918244" containerName="extract-utilities" Oct 13 06:53:38 crc kubenswrapper[4833]: E1013 06:53:38.264856 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca47b79-4b03-4940-b41e-d0df78918244" containerName="registry-server" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.264866 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca47b79-4b03-4940-b41e-d0df78918244" containerName="registry-server" Oct 13 06:53:38 crc kubenswrapper[4833]: E1013 06:53:38.264896 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerName="registry-server" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.264906 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerName="registry-server" Oct 13 06:53:38 crc kubenswrapper[4833]: E1013 06:53:38.264923 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerName="extract-content" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.264933 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerName="extract-content" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.265149 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca47b79-4b03-4940-b41e-d0df78918244" containerName="registry-server" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.265174 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9bfeed-cc45-475f-888d-aa16cf0b4745" containerName="registry-server" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.266792 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.275121 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tss2k"] Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.377313 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcvrl\" (UniqueName: \"kubernetes.io/projected/5801c41d-9ea8-4174-9ea3-a955f071844b-kube-api-access-wcvrl\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.377707 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-catalog-content\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.377726 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-utilities\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.478468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcvrl\" (UniqueName: \"kubernetes.io/projected/5801c41d-9ea8-4174-9ea3-a955f071844b-kube-api-access-wcvrl\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.478560 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-catalog-content\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 
06:53:38.478588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-utilities\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.479095 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-utilities\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.479129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-catalog-content\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.498808 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcvrl\" (UniqueName: \"kubernetes.io/projected/5801c41d-9ea8-4174-9ea3-a955f071844b-kube-api-access-wcvrl\") pod \"certified-operators-tss2k\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:38 crc kubenswrapper[4833]: I1013 06:53:38.585408 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:39 crc kubenswrapper[4833]: I1013 06:53:39.105954 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tss2k"] Oct 13 06:53:39 crc kubenswrapper[4833]: I1013 06:53:39.969283 4833 generic.go:334] "Generic (PLEG): container finished" podID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerID="9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21" exitCode=0 Oct 13 06:53:39 crc kubenswrapper[4833]: I1013 06:53:39.969355 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tss2k" event={"ID":"5801c41d-9ea8-4174-9ea3-a955f071844b","Type":"ContainerDied","Data":"9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21"} Oct 13 06:53:39 crc kubenswrapper[4833]: I1013 06:53:39.969645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tss2k" event={"ID":"5801c41d-9ea8-4174-9ea3-a955f071844b","Type":"ContainerStarted","Data":"26e5aa7e6b6ac69f9482e588b24674ebdc1882534e6d24a002866c3d58fc6af1"} Oct 13 06:53:40 crc kubenswrapper[4833]: I1013 06:53:40.980719 4833 generic.go:334] "Generic (PLEG): container finished" podID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerID="0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda" exitCode=0 Oct 13 06:53:40 crc kubenswrapper[4833]: I1013 06:53:40.980776 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tss2k" event={"ID":"5801c41d-9ea8-4174-9ea3-a955f071844b","Type":"ContainerDied","Data":"0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda"} Oct 13 06:53:41 crc kubenswrapper[4833]: I1013 06:53:41.992148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tss2k" 
event={"ID":"5801c41d-9ea8-4174-9ea3-a955f071844b","Type":"ContainerStarted","Data":"d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507"} Oct 13 06:53:42 crc kubenswrapper[4833]: I1013 06:53:42.013698 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tss2k" podStartSLOduration=2.50669258 podStartE2EDuration="4.013682396s" podCreationTimestamp="2025-10-13 06:53:38 +0000 UTC" firstStartedPulling="2025-10-13 06:53:39.972711552 +0000 UTC m=+1510.073134508" lastFinishedPulling="2025-10-13 06:53:41.479701408 +0000 UTC m=+1511.580124324" observedRunningTime="2025-10-13 06:53:42.010198646 +0000 UTC m=+1512.110621612" watchObservedRunningTime="2025-10-13 06:53:42.013682396 +0000 UTC m=+1512.114105312" Oct 13 06:53:47 crc kubenswrapper[4833]: I1013 06:53:47.945691 4833 scope.go:117] "RemoveContainer" containerID="0e7db4ccbcb65bf77590c076e5c09f23a44034b0adfe352bcf2d8160439db4ff" Oct 13 06:53:47 crc kubenswrapper[4833]: I1013 06:53:47.975509 4833 scope.go:117] "RemoveContainer" containerID="069701aaff75a7a32fda759312c10c3428d8a25f981e8f33c90e90f28d140bf7" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.007759 4833 scope.go:117] "RemoveContainer" containerID="9416ac9c968c4a5a9db5cb7e7ac7d1de0a102dc393f0cd7058fc8426e472195e" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.029467 4833 scope.go:117] "RemoveContainer" containerID="5e2fdf635c2cb9e5aa06ff67b3bfaee2fe1530a9590e2992c6db48616b93f506" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.073921 4833 scope.go:117] "RemoveContainer" containerID="166d638371c21b70a6b659f8a7c10a54bb522993f3251baf156ad2b8b7c87960" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.100185 4833 scope.go:117] "RemoveContainer" containerID="94f41b633f482bd10ac5e593952a9e2259ee80551092d69b162f93a3b634fe8a" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.156158 4833 scope.go:117] "RemoveContainer" containerID="1a0179804d6c84f013bfd95b5895e54b8a21725efd2ed74b7d0d9644d13dad41" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.177101 4833 scope.go:117] "RemoveContainer" containerID="b566af3cdcf6966c91d8eb92814d438d4b7a59c8593fa14b053ca258afc3130a" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.199877 4833 scope.go:117] "RemoveContainer" containerID="5dd9b98ccf548d37dcdac3f8dbd8013568ce016f11698d54a122341be98a8f15" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.218976 4833 scope.go:117] "RemoveContainer" containerID="ae447bf76892b7eb14df95538c7ae37b62536247b753f59a219c7f2aae34cdf7" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.233407 4833 scope.go:117] "RemoveContainer" containerID="e1a96ec3f20ad6ec9e3f20cf64c5ec450408d8b81f36296a339f7790dd37c4ac" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.252148 4833 scope.go:117] "RemoveContainer" containerID="b1df1f9310d6f29aed59c99011ac33f389a422870bb4e4529f70e5dfd9e4d56f" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.270455 4833 scope.go:117] "RemoveContainer" containerID="9326acc1fd0f02265b36428d0471646f19f4f0d184be080ae968bf52c850e15b" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.292229 4833 scope.go:117] "RemoveContainer" containerID="09f05ae0f15f158640bfcddf40e73222963237ea018533209d716c3e0d5be6fb" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.314606 4833 scope.go:117] "RemoveContainer" containerID="cef1392c2e441a2957315fea42ef8d028a266ebdbd1b0a9b26eb9e298a5bb917" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.586622 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.586660 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:48 crc kubenswrapper[4833]: I1013 06:53:48.636396 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:49 crc kubenswrapper[4833]: I1013 06:53:49.110074 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:49 crc kubenswrapper[4833]: I1013 06:53:49.161967 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tss2k"] Oct 13 06:53:51 crc kubenswrapper[4833]: I1013 06:53:51.079905 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tss2k" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerName="registry-server" containerID="cri-o://d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507" gracePeriod=2 Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.040056 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.088398 4833 generic.go:334] "Generic (PLEG): container finished" podID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerID="d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507" exitCode=0 Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.088446 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tss2k" event={"ID":"5801c41d-9ea8-4174-9ea3-a955f071844b","Type":"ContainerDied","Data":"d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507"} Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.088472 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tss2k" event={"ID":"5801c41d-9ea8-4174-9ea3-a955f071844b","Type":"ContainerDied","Data":"26e5aa7e6b6ac69f9482e588b24674ebdc1882534e6d24a002866c3d58fc6af1"} Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.088482 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tss2k" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.088491 4833 scope.go:117] "RemoveContainer" containerID="d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.106309 4833 scope.go:117] "RemoveContainer" containerID="0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.127275 4833 scope.go:117] "RemoveContainer" containerID="9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.153930 4833 scope.go:117] "RemoveContainer" containerID="d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507" Oct 13 06:53:52 crc kubenswrapper[4833]: E1013 06:53:52.154371 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507\": container with ID starting with d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507 not found: ID does not exist" containerID="d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.154419 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507"} err="failed to get container status \"d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507\": rpc error: code = NotFound desc = could not find container \"d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507\": container with ID starting with d8cd8570faf622616ab99f1bf93c71ee40fed7d83dd92496e2ff0d9cd2316507 not found: ID does not exist" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.154445 4833 scope.go:117] "RemoveContainer" containerID="0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda" Oct 13 06:53:52 crc kubenswrapper[4833]: E1013 06:53:52.154812 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda\": container with ID starting with 0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda not found: ID does not exist" containerID="0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.154848 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda"} err="failed to get container status \"0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda\": rpc error: code = NotFound desc = could not find container \"0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda\": container with ID starting with 0a6c3eacb4d4aef2b9aad641cd79a3c83d7ef92fbfad1b0a280a596407426cda not found: ID does not exist" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.154870 4833 scope.go:117] "RemoveContainer" containerID="9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21" Oct 13 06:53:52 crc kubenswrapper[4833]: E1013 06:53:52.155097 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21\": container with ID starting 
with 9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21 not found: ID does not exist" containerID="9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.155208 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21"} err="failed to get container status \"9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21\": rpc error: code = NotFound desc = could not find container \"9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21\": container with ID starting with 9c9c40ec2309e892c35a1369aad26bd220fc834f2b4322d9054134f4f1c47d21 not found: ID does not exist" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.181344 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-utilities\") pod \"5801c41d-9ea8-4174-9ea3-a955f071844b\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.181538 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-catalog-content\") pod \"5801c41d-9ea8-4174-9ea3-a955f071844b\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.181611 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcvrl\" (UniqueName: \"kubernetes.io/projected/5801c41d-9ea8-4174-9ea3-a955f071844b-kube-api-access-wcvrl\") pod \"5801c41d-9ea8-4174-9ea3-a955f071844b\" (UID: \"5801c41d-9ea8-4174-9ea3-a955f071844b\") " Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.182562 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-utilities" (OuterVolumeSpecName: "utilities") pod "5801c41d-9ea8-4174-9ea3-a955f071844b" (UID: "5801c41d-9ea8-4174-9ea3-a955f071844b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.186797 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5801c41d-9ea8-4174-9ea3-a955f071844b-kube-api-access-wcvrl" (OuterVolumeSpecName: "kube-api-access-wcvrl") pod "5801c41d-9ea8-4174-9ea3-a955f071844b" (UID: "5801c41d-9ea8-4174-9ea3-a955f071844b"). InnerVolumeSpecName "kube-api-access-wcvrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.228666 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5801c41d-9ea8-4174-9ea3-a955f071844b" (UID: "5801c41d-9ea8-4174-9ea3-a955f071844b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.283514 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.283586 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcvrl\" (UniqueName: \"kubernetes.io/projected/5801c41d-9ea8-4174-9ea3-a955f071844b-kube-api-access-wcvrl\") on node \"crc\" DevicePath \"\"" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.283602 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801c41d-9ea8-4174-9ea3-a955f071844b-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.432218 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tss2k"] Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.439409 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tss2k"] Oct 13 06:53:52 crc kubenswrapper[4833]: I1013 06:53:52.637245 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" path="/var/lib/kubelet/pods/5801c41d-9ea8-4174-9ea3-a955f071844b/volumes" Oct 13 06:54:00 crc kubenswrapper[4833]: I1013 06:54:00.542898 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 06:54:00 crc kubenswrapper[4833]: I1013 06:54:00.543618 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 06:54:00 crc kubenswrapper[4833]: I1013 06:54:00.543689 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 06:54:00 crc kubenswrapper[4833]: I1013 06:54:00.544660 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 06:54:00 crc kubenswrapper[4833]: I1013 06:54:00.544800 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" gracePeriod=600 Oct 13 06:54:00 crc kubenswrapper[4833]: E1013 06:54:00.672130 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 13 06:54:01 crc kubenswrapper[4833]: I1013 06:54:01.163065 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" exitCode=0
Oct 13 06:54:01 crc kubenswrapper[4833]: I1013 06:54:01.163126 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"}
Oct 13 06:54:01 crc kubenswrapper[4833]: I1013 06:54:01.163162 4833 scope.go:117] "RemoveContainer" containerID="1e0b8be85d4de611f9b44e391e311db1ba1fbe1a8afc86f11d35d9a6be4b16ce"
Oct 13 06:54:01 crc kubenswrapper[4833]: I1013 06:54:01.163646 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:54:01 crc kubenswrapper[4833]: E1013 06:54:01.163848 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:54:13 crc kubenswrapper[4833]: I1013 06:54:13.626995 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:54:13 crc kubenswrapper[4833]: E1013 06:54:13.627603 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:54:27 crc kubenswrapper[4833]: I1013 06:54:27.627809 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:54:27 crc kubenswrapper[4833]: E1013 06:54:27.628910 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:54:42 crc kubenswrapper[4833]: I1013 06:54:42.627841 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:54:42 crc kubenswrapper[4833]: E1013 06:54:42.628659 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:54:48 crc kubenswrapper[4833]: I1013 06:54:48.565929 4833 scope.go:117] "RemoveContainer" containerID="90ee1a23cd38eedd15beb4921be258e3fbaa0411381f1f5b0925d97d2d4fcd83"
Oct 13 06:54:48 crc kubenswrapper[4833]: I1013 06:54:48.606342 4833 scope.go:117] "RemoveContainer" containerID="75b83eb5a0f4a87001ec550008c86da38762ad7776efbc4d0f7db261a9b40d50"
Oct 13 06:54:48 crc kubenswrapper[4833]: I1013 06:54:48.643727 4833 scope.go:117] "RemoveContainer" containerID="bdea18d1dda04a6f48a543d7b832b414dd45a318e0945919afca57e801a3a01c"
Oct 13 06:54:57 crc kubenswrapper[4833]: I1013 06:54:57.627399 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:54:57 crc kubenswrapper[4833]: E1013 06:54:57.628524 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:55:09 crc kubenswrapper[4833]: I1013 06:55:09.627737 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:55:09 crc kubenswrapper[4833]: E1013 06:55:09.628895 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:55:22 crc kubenswrapper[4833]: I1013 06:55:22.627370 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:55:22 crc kubenswrapper[4833]: E1013 06:55:22.628044 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:55:37 crc kubenswrapper[4833]: I1013 06:55:37.627514 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:55:37 crc kubenswrapper[4833]: E1013 06:55:37.628849 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:55:48 crc kubenswrapper[4833]: I1013 06:55:48.627315 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:55:48 crc kubenswrapper[4833]: E1013 06:55:48.628142 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:55:48 crc kubenswrapper[4833]: I1013 06:55:48.721716 4833 scope.go:117] "RemoveContainer" containerID="dfc611bec1d4ce90589a0eac486b1559ddf622b45b5b507cb8abab55d933b401"
Oct 13 06:56:01 crc kubenswrapper[4833]: I1013 06:56:01.627888 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:56:01 crc kubenswrapper[4833]: E1013 06:56:01.629077 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:56:14 crc kubenswrapper[4833]: I1013 06:56:14.627812 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:56:14 crc kubenswrapper[4833]: E1013 06:56:14.628974 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:56:26 crc kubenswrapper[4833]: I1013 06:56:26.626826 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:56:26 crc kubenswrapper[4833]: E1013 06:56:26.627735 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:56:37 crc kubenswrapper[4833]: I1013 06:56:37.627333 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:56:37 crc kubenswrapper[4833]: E1013 06:56:37.628151 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:56:48 crc kubenswrapper[4833]: I1013 06:56:48.790787 4833 scope.go:117] "RemoveContainer" containerID="1bde0c46530488bbd8249b35833c0404f05caf636dc071a435f090a5decab08b"
Oct 13 06:56:48 crc kubenswrapper[4833]: I1013 06:56:48.815613 4833 scope.go:117] "RemoveContainer" containerID="d8b67d2984098598469efeab7a295956ffe682e1e8a8ed1a870f368f9c9f736f"
Oct 13 06:56:50 crc kubenswrapper[4833]: I1013 06:56:50.630143 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:56:50 crc kubenswrapper[4833]: E1013 06:56:50.630668 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:57:05 crc kubenswrapper[4833]: I1013 06:57:05.627651 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:57:05 crc kubenswrapper[4833]: E1013 06:57:05.628406 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:57:19 crc kubenswrapper[4833]: I1013 06:57:19.627924 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:57:19 crc kubenswrapper[4833]: E1013 06:57:19.628754 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:57:34 crc kubenswrapper[4833]: I1013 06:57:34.627469 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:57:34 crc kubenswrapper[4833]: E1013 06:57:34.628262 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 06:57:47 crc kubenswrapper[4833]: I1013 06:57:47.627887 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206"
Oct 13 06:57:47 crc kubenswrapper[4833]: E1013 06:57:47.629008 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.642801 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7lnp"] Oct 13 06:57:48 crc kubenswrapper[4833]: E1013 06:57:48.643433 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerName="registry-server" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.643448 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerName="registry-server" Oct 13 06:57:48 crc kubenswrapper[4833]: E1013 06:57:48.643466 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerName="extract-content" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.643472 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerName="extract-content" Oct 13 06:57:48 crc kubenswrapper[4833]: E1013 06:57:48.643486 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerName="extract-utilities" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.643493 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerName="extract-utilities" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.643684 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5801c41d-9ea8-4174-9ea3-a955f071844b" containerName="registry-server" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.644884 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.666195 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7lnp"] Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.710621 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-utilities\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.710694 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-catalog-content\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.710716 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwrx\" (UniqueName: \"kubernetes.io/projected/232dac39-04ff-4952-84ab-f5e7cd45254b-kube-api-access-ldwrx\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.811613 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-utilities\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.811710 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-catalog-content\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.811734 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldwrx\" (UniqueName: \"kubernetes.io/projected/232dac39-04ff-4952-84ab-f5e7cd45254b-kube-api-access-ldwrx\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.812389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-catalog-content\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.812467 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-utilities\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.844471 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ldwrx\" (UniqueName: \"kubernetes.io/projected/232dac39-04ff-4952-84ab-f5e7cd45254b-kube-api-access-ldwrx\") pod \"redhat-operators-p7lnp\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:48 crc kubenswrapper[4833]: I1013 06:57:48.963479 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:49 crc kubenswrapper[4833]: I1013 06:57:49.402642 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7lnp"] Oct 13 06:57:50 crc kubenswrapper[4833]: I1013 06:57:50.041402 4833 generic.go:334] "Generic (PLEG): container finished" podID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerID="f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8" exitCode=0 Oct 13 06:57:50 crc kubenswrapper[4833]: I1013 06:57:50.041502 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7lnp" event={"ID":"232dac39-04ff-4952-84ab-f5e7cd45254b","Type":"ContainerDied","Data":"f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8"} Oct 13 06:57:50 crc kubenswrapper[4833]: I1013 06:57:50.041766 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7lnp" event={"ID":"232dac39-04ff-4952-84ab-f5e7cd45254b","Type":"ContainerStarted","Data":"bfc4ced6d40a8a07ec4679408248485755dfa3d7bbd5ca6e50f874a08cd09863"} Oct 13 06:57:50 crc kubenswrapper[4833]: I1013 06:57:50.044007 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 06:57:52 crc kubenswrapper[4833]: I1013 06:57:52.058762 4833 generic.go:334] "Generic (PLEG): container finished" podID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerID="5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b" exitCode=0 Oct 13 06:57:52 crc kubenswrapper[4833]: I1013 06:57:52.058896 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7lnp" event={"ID":"232dac39-04ff-4952-84ab-f5e7cd45254b","Type":"ContainerDied","Data":"5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b"} Oct 13 06:57:53 crc kubenswrapper[4833]: I1013 06:57:53.068506 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7lnp" event={"ID":"232dac39-04ff-4952-84ab-f5e7cd45254b","Type":"ContainerStarted","Data":"4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534"} Oct 13 06:57:53 crc kubenswrapper[4833]: I1013 06:57:53.092071 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7lnp" podStartSLOduration=2.6859984839999997 podStartE2EDuration="5.092049227s" podCreationTimestamp="2025-10-13 06:57:48 +0000 UTC" firstStartedPulling="2025-10-13 06:57:50.043738857 +0000 UTC m=+1760.144161783" lastFinishedPulling="2025-10-13 06:57:52.44978961 +0000 UTC m=+1762.550212526" observedRunningTime="2025-10-13 06:57:53.08659001 +0000 UTC m=+1763.187012946" watchObservedRunningTime="2025-10-13 06:57:53.092049227 +0000 UTC m=+1763.192472143" Oct 13 06:57:58 crc kubenswrapper[4833]: I1013 06:57:58.626841 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" Oct 13 06:57:58 crc kubenswrapper[4833]: E1013 06:57:58.627613 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 06:57:58 crc kubenswrapper[4833]: I1013 06:57:58.964137 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:58 crc kubenswrapper[4833]: I1013 06:57:58.964305 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:59 crc kubenswrapper[4833]: I1013 06:57:59.024328 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:59 crc kubenswrapper[4833]: I1013 06:57:59.158286 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:57:59 crc kubenswrapper[4833]: I1013 06:57:59.271762 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7lnp"] Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.126246 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7lnp" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerName="registry-server" containerID="cri-o://4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534" gracePeriod=2 Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.519017 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.697766 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-utilities\") pod \"232dac39-04ff-4952-84ab-f5e7cd45254b\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.697844 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldwrx\" (UniqueName: \"kubernetes.io/projected/232dac39-04ff-4952-84ab-f5e7cd45254b-kube-api-access-ldwrx\") pod \"232dac39-04ff-4952-84ab-f5e7cd45254b\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.697947 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-catalog-content\") pod \"232dac39-04ff-4952-84ab-f5e7cd45254b\" (UID: \"232dac39-04ff-4952-84ab-f5e7cd45254b\") " Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.700102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-utilities" (OuterVolumeSpecName: "utilities") pod "232dac39-04ff-4952-84ab-f5e7cd45254b" (UID: "232dac39-04ff-4952-84ab-f5e7cd45254b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.706390 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232dac39-04ff-4952-84ab-f5e7cd45254b-kube-api-access-ldwrx" (OuterVolumeSpecName: "kube-api-access-ldwrx") pod "232dac39-04ff-4952-84ab-f5e7cd45254b" (UID: "232dac39-04ff-4952-84ab-f5e7cd45254b"). InnerVolumeSpecName "kube-api-access-ldwrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.802999 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 06:58:01 crc kubenswrapper[4833]: I1013 06:58:01.803094 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldwrx\" (UniqueName: \"kubernetes.io/projected/232dac39-04ff-4952-84ab-f5e7cd45254b-kube-api-access-ldwrx\") on node \"crc\" DevicePath \"\"" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.134001 4833 generic.go:334] "Generic (PLEG): container finished" podID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerID="4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534" exitCode=0 Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.134064 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7lnp" event={"ID":"232dac39-04ff-4952-84ab-f5e7cd45254b","Type":"ContainerDied","Data":"4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534"} Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.134143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7lnp" event={"ID":"232dac39-04ff-4952-84ab-f5e7cd45254b","Type":"ContainerDied","Data":"bfc4ced6d40a8a07ec4679408248485755dfa3d7bbd5ca6e50f874a08cd09863"} Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.134175 4833 scope.go:117] "RemoveContainer" containerID="4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.135475 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7lnp" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.156325 4833 scope.go:117] "RemoveContainer" containerID="5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.180251 4833 scope.go:117] "RemoveContainer" containerID="f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.226775 4833 scope.go:117] "RemoveContainer" containerID="4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534" Oct 13 06:58:02 crc kubenswrapper[4833]: E1013 06:58:02.227621 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534\": container with ID starting with 4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534 not found: ID does not exist" containerID="4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.227816 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534"} err="failed to get container status \"4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534\": rpc error: code = NotFound desc = could not find container \"4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534\": container with ID starting with 4814606c2029574c53e979177ee8533ac0ee9a6b198248e3cd628ee43ae9a534 not found: ID does not exist" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.227926 4833 scope.go:117] "RemoveContainer" containerID="5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b" Oct 13 06:58:02 crc kubenswrapper[4833]: E1013 06:58:02.228835 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b\": container with ID starting with 5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b not found: ID does not exist" containerID="5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.228904 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b"} err="failed to get container status \"5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b\": rpc error: code = NotFound desc = could not find container \"5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b\": container with ID starting with 5d3e80529ce1dc828e7c8d6fa52d9bcea0c83c37eafee608bcd3fcb1f1101a9b not found: ID does not exist" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.228957 4833 scope.go:117] "RemoveContainer" containerID="f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8" Oct 13 06:58:02 crc kubenswrapper[4833]: E1013 06:58:02.229412 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8\": container with ID starting with f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8 not found: ID does not exist" containerID="f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8" 
Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.229450 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8"} err="failed to get container status \"f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8\": rpc error: code = NotFound desc = could not find container \"f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8\": container with ID starting with f7f7816f5bc01c0e241955138840fc66e89e3cd6de2ff2eff703ce8df78e51d8 not found: ID does not exist" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.829967 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "232dac39-04ff-4952-84ab-f5e7cd45254b" (UID: "232dac39-04ff-4952-84ab-f5e7cd45254b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 06:58:02 crc kubenswrapper[4833]: I1013 06:58:02.919573 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/232dac39-04ff-4952-84ab-f5e7cd45254b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 06:58:03 crc kubenswrapper[4833]: I1013 06:58:03.079468 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7lnp"] Oct 13 06:58:03 crc kubenswrapper[4833]: I1013 06:58:03.086815 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7lnp"] Oct 13 06:58:04 crc kubenswrapper[4833]: I1013 06:58:04.645237 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" path="/var/lib/kubelet/pods/232dac39-04ff-4952-84ab-f5e7cd45254b/volumes" Oct 13 06:58:13 crc kubenswrapper[4833]: I1013 06:58:13.627138 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" Oct 13 06:58:13 crc kubenswrapper[4833]: E1013 06:58:13.627878 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 06:58:27 crc kubenswrapper[4833]: I1013 06:58:27.626652 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" Oct 13 06:58:27 crc kubenswrapper[4833]: E1013 06:58:27.627472 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 06:58:41 crc kubenswrapper[4833]: I1013 06:58:41.626847 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" Oct 13 06:58:41 crc kubenswrapper[4833]: E1013 06:58:41.629002 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 06:58:52 crc kubenswrapper[4833]: I1013 06:58:52.627296 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" Oct 13 06:58:52 crc kubenswrapper[4833]: E1013 06:58:52.628243 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 06:59:06 crc kubenswrapper[4833]: I1013 06:59:06.628919 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" Oct 13 06:59:07 crc kubenswrapper[4833]: I1013 06:59:07.677426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"43b26ec40f4a2a9a47572e66a50a140a105b3043936cbce1085e1ab35bf9a834"} Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.168238 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv"] Oct 13 07:00:00 crc kubenswrapper[4833]: E1013 07:00:00.169087 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerName="registry-server" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.169100 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerName="registry-server" Oct 13 07:00:00 crc kubenswrapper[4833]: E1013 07:00:00.169121 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerName="extract-content" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.169129 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerName="extract-content" Oct 13 07:00:00 crc kubenswrapper[4833]: E1013 07:00:00.169158 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerName="extract-utilities" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.169166 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerName="extract-utilities" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.169324 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="232dac39-04ff-4952-84ab-f5e7cd45254b" containerName="registry-server" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.169826 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.171699 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.172520 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.179212 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv"] Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.307947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-config-volume\") pod \"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.308051 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-secret-volume\") pod \"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.308116 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzdqq\" (UniqueName: \"kubernetes.io/projected/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-kube-api-access-mzdqq\") pod \"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.409031 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-config-volume\") pod \"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.409103 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-secret-volume\") pod \"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.409155 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzdqq\" (UniqueName: \"kubernetes.io/projected/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-kube-api-access-mzdqq\") pod \"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.410109 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-config-volume\") pod 
\"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.415991 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-secret-volume\") pod \"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.424611 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzdqq\" (UniqueName: \"kubernetes.io/projected/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-kube-api-access-mzdqq\") pod \"collect-profiles-29338980-fcrcv\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.495473 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:00 crc kubenswrapper[4833]: I1013 07:00:00.958688 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv"] Oct 13 07:00:01 crc kubenswrapper[4833]: I1013 07:00:01.148865 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" event={"ID":"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828","Type":"ContainerStarted","Data":"9af7eb907b7dc9f2124815b9411853ac86802063f6e2e77cc4f932e4c1b39ce7"} Oct 13 07:00:01 crc kubenswrapper[4833]: I1013 07:00:01.149187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" event={"ID":"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828","Type":"ContainerStarted","Data":"fa4d3bb8bad34ac68de244f2c49db276e791b947e71319783226e4a933484200"} Oct 13 07:00:01 crc kubenswrapper[4833]: I1013 07:00:01.165917 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" podStartSLOduration=1.165896056 podStartE2EDuration="1.165896056s" podCreationTimestamp="2025-10-13 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:00:01.162453197 +0000 UTC m=+1891.262876123" watchObservedRunningTime="2025-10-13 07:00:01.165896056 +0000 UTC m=+1891.266318982" Oct 13 07:00:02 crc kubenswrapper[4833]: I1013 07:00:02.161289 4833 generic.go:334] "Generic (PLEG): container finished" podID="074bf6e1-e4bc-43f6-a26e-18f7c5fbe828" containerID="9af7eb907b7dc9f2124815b9411853ac86802063f6e2e77cc4f932e4c1b39ce7" exitCode=0 Oct 13 07:00:02 crc kubenswrapper[4833]: I1013 07:00:02.161409 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" event={"ID":"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828","Type":"ContainerDied","Data":"9af7eb907b7dc9f2124815b9411853ac86802063f6e2e77cc4f932e4c1b39ce7"} Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.438376 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.482396 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-config-volume\") pod \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.482549 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzdqq\" (UniqueName: \"kubernetes.io/projected/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-kube-api-access-mzdqq\") pod \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.482618 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-secret-volume\") pod \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\" (UID: \"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828\") " Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.483309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-config-volume" (OuterVolumeSpecName: "config-volume") pod "074bf6e1-e4bc-43f6-a26e-18f7c5fbe828" (UID: "074bf6e1-e4bc-43f6-a26e-18f7c5fbe828"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.487717 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "074bf6e1-e4bc-43f6-a26e-18f7c5fbe828" (UID: "074bf6e1-e4bc-43f6-a26e-18f7c5fbe828"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.488576 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-kube-api-access-mzdqq" (OuterVolumeSpecName: "kube-api-access-mzdqq") pod "074bf6e1-e4bc-43f6-a26e-18f7c5fbe828" (UID: "074bf6e1-e4bc-43f6-a26e-18f7c5fbe828"). InnerVolumeSpecName "kube-api-access-mzdqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.583852 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.583892 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzdqq\" (UniqueName: \"kubernetes.io/projected/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-kube-api-access-mzdqq\") on node \"crc\" DevicePath \"\"" Oct 13 07:00:03 crc kubenswrapper[4833]: I1013 07:00:03.583904 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 07:00:04 crc kubenswrapper[4833]: I1013 07:00:04.179499 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" event={"ID":"074bf6e1-e4bc-43f6-a26e-18f7c5fbe828","Type":"ContainerDied","Data":"fa4d3bb8bad34ac68de244f2c49db276e791b947e71319783226e4a933484200"} Oct 13 07:00:04 crc kubenswrapper[4833]: I1013 07:00:04.179577 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa4d3bb8bad34ac68de244f2c49db276e791b947e71319783226e4a933484200" Oct 13 07:00:04 crc kubenswrapper[4833]: I1013 07:00:04.179563 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv" Oct 13 07:01:30 crc kubenswrapper[4833]: I1013 07:01:30.542405 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:01:30 crc kubenswrapper[4833]: I1013 07:01:30.543116 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:02:00 crc kubenswrapper[4833]: I1013 07:02:00.543039 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:02:00 crc kubenswrapper[4833]: I1013 07:02:00.543705 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:02:30 crc kubenswrapper[4833]: I1013 07:02:30.542798 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:02:30 crc kubenswrapper[4833]: I1013 
07:02:30.543509 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:02:30 crc kubenswrapper[4833]: I1013 07:02:30.543605 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:02:30 crc kubenswrapper[4833]: I1013 07:02:30.544630 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43b26ec40f4a2a9a47572e66a50a140a105b3043936cbce1085e1ab35bf9a834"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:02:30 crc kubenswrapper[4833]: I1013 07:02:30.544716 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://43b26ec40f4a2a9a47572e66a50a140a105b3043936cbce1085e1ab35bf9a834" gracePeriod=600 Oct 13 07:02:31 crc kubenswrapper[4833]: I1013 07:02:31.396260 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="43b26ec40f4a2a9a47572e66a50a140a105b3043936cbce1085e1ab35bf9a834" exitCode=0 Oct 13 07:02:31 crc kubenswrapper[4833]: I1013 07:02:31.396295 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"43b26ec40f4a2a9a47572e66a50a140a105b3043936cbce1085e1ab35bf9a834"} Oct 13 07:02:31 crc kubenswrapper[4833]: I1013 07:02:31.396984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"} Oct 13 07:02:31 crc kubenswrapper[4833]: I1013 07:02:31.397033 4833 scope.go:117] "RemoveContainer" containerID="40fe12eaaa5fd00d2647217d1c5ca48c96e0fd15e2615087f10225f5f2df2206" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.017401 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ts7dp"] Oct 13 07:02:38 crc kubenswrapper[4833]: E1013 07:02:38.018618 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074bf6e1-e4bc-43f6-a26e-18f7c5fbe828" containerName="collect-profiles" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.018643 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="074bf6e1-e4bc-43f6-a26e-18f7c5fbe828" containerName="collect-profiles" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.018974 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="074bf6e1-e4bc-43f6-a26e-18f7c5fbe828" containerName="collect-profiles" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.021330 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.029851 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts7dp"] Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.110585 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-catalog-content\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.110657 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7wr\" (UniqueName: \"kubernetes.io/projected/dd758f12-e038-4fdd-8b09-ef7625eda31f-kube-api-access-wn7wr\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.110702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-utilities\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.212215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-catalog-content\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.212255 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7wr\" (UniqueName: \"kubernetes.io/projected/dd758f12-e038-4fdd-8b09-ef7625eda31f-kube-api-access-wn7wr\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.212285 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-utilities\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.212796 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-catalog-content\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.212809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-utilities\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.232571 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wn7wr\" (UniqueName: \"kubernetes.io/projected/dd758f12-e038-4fdd-8b09-ef7625eda31f-kube-api-access-wn7wr\") pod \"redhat-marketplace-ts7dp\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.337933 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:38 crc kubenswrapper[4833]: I1013 07:02:38.773459 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts7dp"] Oct 13 07:02:39 crc kubenswrapper[4833]: I1013 07:02:39.471621 4833 generic.go:334] "Generic (PLEG): container finished" podID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerID="205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161" exitCode=0 Oct 13 07:02:39 crc kubenswrapper[4833]: I1013 07:02:39.471741 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts7dp" event={"ID":"dd758f12-e038-4fdd-8b09-ef7625eda31f","Type":"ContainerDied","Data":"205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161"} Oct 13 07:02:39 crc kubenswrapper[4833]: I1013 07:02:39.472719 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts7dp" event={"ID":"dd758f12-e038-4fdd-8b09-ef7625eda31f","Type":"ContainerStarted","Data":"a572384cb88ef26cc65d20c24e5570b4d3d032402e874ff5943840ffb72239cf"} Oct 13 07:02:40 crc kubenswrapper[4833]: I1013 07:02:40.485323 4833 generic.go:334] "Generic (PLEG): container finished" podID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerID="ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b" exitCode=0 Oct 13 07:02:40 crc kubenswrapper[4833]: I1013 07:02:40.485363 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts7dp" event={"ID":"dd758f12-e038-4fdd-8b09-ef7625eda31f","Type":"ContainerDied","Data":"ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b"} Oct 13 07:02:41 crc kubenswrapper[4833]: I1013 07:02:41.496071 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts7dp" event={"ID":"dd758f12-e038-4fdd-8b09-ef7625eda31f","Type":"ContainerStarted","Data":"2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f"} Oct 13 07:02:41 crc kubenswrapper[4833]: I1013 07:02:41.518434 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ts7dp" podStartSLOduration=3.046014073 podStartE2EDuration="4.518416583s" podCreationTimestamp="2025-10-13 07:02:37 +0000 UTC" firstStartedPulling="2025-10-13 07:02:39.473285783 +0000 UTC m=+2049.573708709" lastFinishedPulling="2025-10-13 07:02:40.945688313 +0000 UTC m=+2051.046111219" observedRunningTime="2025-10-13 07:02:41.512605428 +0000 UTC m=+2051.613028344" watchObservedRunningTime="2025-10-13 07:02:41.518416583 +0000 UTC m=+2051.618839499" Oct 13 07:02:48 crc kubenswrapper[4833]: I1013 07:02:48.339037 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:48 crc kubenswrapper[4833]: I1013 07:02:48.341780 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:48 crc kubenswrapper[4833]: I1013 07:02:48.409202 4833 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:48 crc kubenswrapper[4833]: I1013 07:02:48.596268 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:48 crc kubenswrapper[4833]: I1013 07:02:48.651494 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts7dp"] Oct 13 07:02:50 crc kubenswrapper[4833]: I1013 07:02:50.565826 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ts7dp" podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerName="registry-server" containerID="cri-o://2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f" gracePeriod=2 Oct 13 07:02:50 crc kubenswrapper[4833]: I1013 07:02:50.977897 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.124171 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-utilities\") pod \"dd758f12-e038-4fdd-8b09-ef7625eda31f\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.124271 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7wr\" (UniqueName: \"kubernetes.io/projected/dd758f12-e038-4fdd-8b09-ef7625eda31f-kube-api-access-wn7wr\") pod \"dd758f12-e038-4fdd-8b09-ef7625eda31f\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.124304 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-catalog-content\") pod \"dd758f12-e038-4fdd-8b09-ef7625eda31f\" (UID: \"dd758f12-e038-4fdd-8b09-ef7625eda31f\") " Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.125375 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-utilities" (OuterVolumeSpecName: "utilities") pod "dd758f12-e038-4fdd-8b09-ef7625eda31f" (UID: "dd758f12-e038-4fdd-8b09-ef7625eda31f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.130349 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd758f12-e038-4fdd-8b09-ef7625eda31f-kube-api-access-wn7wr" (OuterVolumeSpecName: "kube-api-access-wn7wr") pod "dd758f12-e038-4fdd-8b09-ef7625eda31f" (UID: "dd758f12-e038-4fdd-8b09-ef7625eda31f"). InnerVolumeSpecName "kube-api-access-wn7wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.137805 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd758f12-e038-4fdd-8b09-ef7625eda31f" (UID: "dd758f12-e038-4fdd-8b09-ef7625eda31f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.225280 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.225330 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7wr\" (UniqueName: \"kubernetes.io/projected/dd758f12-e038-4fdd-8b09-ef7625eda31f-kube-api-access-wn7wr\") on node \"crc\" DevicePath \"\"" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.225348 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd758f12-e038-4fdd-8b09-ef7625eda31f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.574338 4833 generic.go:334] "Generic (PLEG): container finished" podID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerID="2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f" exitCode=0 Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.574389 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts7dp" event={"ID":"dd758f12-e038-4fdd-8b09-ef7625eda31f","Type":"ContainerDied","Data":"2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f"} Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.574403 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts7dp" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.574426 4833 scope.go:117] "RemoveContainer" containerID="2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.574414 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts7dp" event={"ID":"dd758f12-e038-4fdd-8b09-ef7625eda31f","Type":"ContainerDied","Data":"a572384cb88ef26cc65d20c24e5570b4d3d032402e874ff5943840ffb72239cf"} Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.593267 4833 scope.go:117] "RemoveContainer" containerID="ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.610594 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts7dp"] Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.619385 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts7dp"] Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.635098 4833 scope.go:117] "RemoveContainer" containerID="205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.652712 4833 scope.go:117] "RemoveContainer" containerID="2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f" Oct 13 07:02:51 crc kubenswrapper[4833]: E1013 07:02:51.653170 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f\": container with ID starting with 2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f not found: ID does not exist" containerID="2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.653220 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f"} err="failed to get container status \"2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f\": rpc error: code = NotFound desc = could not find container \"2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f\": container with ID starting with 2d824618b2bb1fa2c94e26e88d56ae749734f41f855d290d855d548dfdb8ee0f not found: ID does not exist" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.653246 4833 scope.go:117] "RemoveContainer" containerID="ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b" Oct 13 07:02:51 crc kubenswrapper[4833]: E1013 07:02:51.653546 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b\": container with ID starting with ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b not found: ID does not exist" containerID="ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.653614 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b"} err="failed to get container status \"ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b\": rpc error: code = NotFound desc = could not find container \"ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b\": container with ID starting with ed9b4793aca7cff7ea778405b29f4362ce5dd54880f21a9363f02051fb766f1b not found: ID does not exist" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.653649 4833 scope.go:117] "RemoveContainer" containerID="205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161" Oct 13 07:02:51 crc kubenswrapper[4833]: E1013 07:02:51.654126 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161\": container with ID starting with 205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161 not found: ID does not exist" containerID="205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161" Oct 13 07:02:51 crc kubenswrapper[4833]: I1013 07:02:51.654158 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161"} err="failed to get container status \"205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161\": rpc error: code = NotFound desc = could not find container \"205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161\": container with ID starting with 205ea3767a376e865524279961ba3b000f3493a8aa0d2f45f6362449b01d4161 not found: ID does not exist" Oct 13 07:02:52 crc kubenswrapper[4833]: I1013 07:02:52.639633 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" path="/var/lib/kubelet/pods/dd758f12-e038-4fdd-8b09-ef7625eda31f/volumes" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.611204 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gpxgw"] Oct 13 07:02:55 crc kubenswrapper[4833]: E1013 07:02:55.612169 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerName="extract-content" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.612185 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerName="extract-content" Oct 13 07:02:55 crc kubenswrapper[4833]: E1013 07:02:55.612203 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerName="extract-utilities" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.612209 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerName="extract-utilities" Oct 13 07:02:55 crc kubenswrapper[4833]: E1013 07:02:55.612249 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerName="registry-server" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.612255 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerName="registry-server" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.612407 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd758f12-e038-4fdd-8b09-ef7625eda31f" containerName="registry-server" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.613565 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.625194 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpxgw"] Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.786926 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-utilities\") pod \"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.786981 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwrb\" (UniqueName: \"kubernetes.io/projected/b1e72dd9-6116-40c9-a421-ebc21d507598-kube-api-access-9dwrb\") pod \"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.787337 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-catalog-content\") pod \"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.888708 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-utilities\") pod \"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.888756 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwrb\" (UniqueName: \"kubernetes.io/projected/b1e72dd9-6116-40c9-a421-ebc21d507598-kube-api-access-9dwrb\") pod 
\"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.888831 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-catalog-content\") pod \"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.889341 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-utilities\") pod \"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.889350 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-catalog-content\") pod \"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.923710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwrb\" (UniqueName: \"kubernetes.io/projected/b1e72dd9-6116-40c9-a421-ebc21d507598-kube-api-access-9dwrb\") pod \"community-operators-gpxgw\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:55 crc kubenswrapper[4833]: I1013 07:02:55.948792 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:02:56 crc kubenswrapper[4833]: I1013 07:02:56.225196 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpxgw"] Oct 13 07:02:56 crc kubenswrapper[4833]: I1013 07:02:56.624158 4833 generic.go:334] "Generic (PLEG): container finished" podID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerID="678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3" exitCode=0 Oct 13 07:02:56 crc kubenswrapper[4833]: I1013 07:02:56.624247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpxgw" event={"ID":"b1e72dd9-6116-40c9-a421-ebc21d507598","Type":"ContainerDied","Data":"678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3"} Oct 13 07:02:56 crc kubenswrapper[4833]: I1013 07:02:56.624407 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpxgw" event={"ID":"b1e72dd9-6116-40c9-a421-ebc21d507598","Type":"ContainerStarted","Data":"dbb52b8ceef62eca7e4e1214e1036464cc3644f00907c91a4be3e5e3deba41c5"} Oct 13 07:02:56 crc kubenswrapper[4833]: I1013 07:02:56.634350 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:02:57 crc kubenswrapper[4833]: I1013 07:02:57.637249 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpxgw" event={"ID":"b1e72dd9-6116-40c9-a421-ebc21d507598","Type":"ContainerStarted","Data":"0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479"} Oct 13 07:02:58 crc kubenswrapper[4833]: I1013 07:02:58.646935 4833 generic.go:334] "Generic (PLEG): container finished" podID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerID="0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479" exitCode=0 Oct 13 07:02:58 crc kubenswrapper[4833]: I1013 07:02:58.646978 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpxgw" event={"ID":"b1e72dd9-6116-40c9-a421-ebc21d507598","Type":"ContainerDied","Data":"0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479"} Oct 13 07:02:59 crc kubenswrapper[4833]: I1013 07:02:59.656712 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpxgw" event={"ID":"b1e72dd9-6116-40c9-a421-ebc21d507598","Type":"ContainerStarted","Data":"ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708"} Oct 13 07:02:59 crc kubenswrapper[4833]: I1013 07:02:59.676444 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gpxgw" podStartSLOduration=2.05106711 podStartE2EDuration="4.676423593s" podCreationTimestamp="2025-10-13 07:02:55 +0000 UTC" firstStartedPulling="2025-10-13 07:02:56.633247706 +0000 UTC m=+2066.733670662" lastFinishedPulling="2025-10-13 07:02:59.258604229 +0000 UTC m=+2069.359027145" observedRunningTime="2025-10-13 07:02:59.671942025 +0000 UTC m=+2069.772364961" watchObservedRunningTime="2025-10-13 07:02:59.676423593 +0000 UTC m=+2069.776846509" Oct 13 07:03:05 crc kubenswrapper[4833]: I1013 07:03:05.950178 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:03:05 crc kubenswrapper[4833]: I1013 07:03:05.950695 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gpxgw" 
Oct 13 07:03:05 crc kubenswrapper[4833]: I1013 07:03:05.999585 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:03:06 crc kubenswrapper[4833]: I1013 07:03:06.790398 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:03:06 crc kubenswrapper[4833]: I1013 07:03:06.838504 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gpxgw"] Oct 13 07:03:08 crc kubenswrapper[4833]: I1013 07:03:08.734984 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gpxgw" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerName="registry-server" containerID="cri-o://ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708" gracePeriod=2 Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.129573 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.298497 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-utilities\") pod \"b1e72dd9-6116-40c9-a421-ebc21d507598\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.298861 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-catalog-content\") pod \"b1e72dd9-6116-40c9-a421-ebc21d507598\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.299047 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dwrb\" (UniqueName: \"kubernetes.io/projected/b1e72dd9-6116-40c9-a421-ebc21d507598-kube-api-access-9dwrb\") pod \"b1e72dd9-6116-40c9-a421-ebc21d507598\" (UID: \"b1e72dd9-6116-40c9-a421-ebc21d507598\") " Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.299465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-utilities" (OuterVolumeSpecName: "utilities") pod "b1e72dd9-6116-40c9-a421-ebc21d507598" (UID: "b1e72dd9-6116-40c9-a421-ebc21d507598"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.314910 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e72dd9-6116-40c9-a421-ebc21d507598-kube-api-access-9dwrb" (OuterVolumeSpecName: "kube-api-access-9dwrb") pod "b1e72dd9-6116-40c9-a421-ebc21d507598" (UID: "b1e72dd9-6116-40c9-a421-ebc21d507598"). InnerVolumeSpecName "kube-api-access-9dwrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.368263 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1e72dd9-6116-40c9-a421-ebc21d507598" (UID: "b1e72dd9-6116-40c9-a421-ebc21d507598"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.401160 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.401218 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e72dd9-6116-40c9-a421-ebc21d507598-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.401246 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dwrb\" (UniqueName: \"kubernetes.io/projected/b1e72dd9-6116-40c9-a421-ebc21d507598-kube-api-access-9dwrb\") on node \"crc\" DevicePath \"\"" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.748025 4833 generic.go:334] "Generic (PLEG): container finished" podID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerID="ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708" exitCode=0 Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.748067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpxgw" event={"ID":"b1e72dd9-6116-40c9-a421-ebc21d507598","Type":"ContainerDied","Data":"ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708"} Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.748094 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpxgw" event={"ID":"b1e72dd9-6116-40c9-a421-ebc21d507598","Type":"ContainerDied","Data":"dbb52b8ceef62eca7e4e1214e1036464cc3644f00907c91a4be3e5e3deba41c5"} Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.748113 4833 scope.go:117] "RemoveContainer" containerID="ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.748154 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gpxgw" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.775480 4833 scope.go:117] "RemoveContainer" containerID="0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.795859 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gpxgw"] Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.801122 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gpxgw"] Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.828183 4833 scope.go:117] "RemoveContainer" containerID="678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.848136 4833 scope.go:117] "RemoveContainer" containerID="ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708" Oct 13 07:03:09 crc kubenswrapper[4833]: E1013 07:03:09.848564 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708\": container with ID starting with ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708 not found: ID does not exist" containerID="ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.848603 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708"} err="failed to get container status \"ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708\": rpc error: code = NotFound desc = could not find container \"ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708\": container with ID starting with ef8b01c0562d07f98e9d8983afd107fa5e7adc01757995b7c419a4d64bb1e708 not found: ID does not exist" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.848631 4833 scope.go:117] "RemoveContainer" containerID="0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479" Oct 13 07:03:09 crc kubenswrapper[4833]: E1013 07:03:09.848935 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479\": container with ID starting with 0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479 not found: ID does not exist" containerID="0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.848980 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479"} err="failed to get container status \"0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479\": rpc error: code = NotFound desc = could not find container \"0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479\": container with ID starting with 0c2c9aacd1129ef7e6e7f41b3beee7b9a255b0bd80507a917e8d1cd81c045479 not found: ID does not exist" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.849118 4833 scope.go:117] "RemoveContainer" containerID="678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3" Oct 13 07:03:09 crc kubenswrapper[4833]: E1013 07:03:09.849514 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3\": container with ID starting with 678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3 not found: ID does not exist" containerID="678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3" Oct 13 07:03:09 crc kubenswrapper[4833]: I1013 07:03:09.849567 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3"} err="failed to get container status \"678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3\": rpc error: code = NotFound desc = could not find container \"678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3\": container with ID starting with 678da03319997e2bfd3ca20b036d5308592d874847485b34d5c8a928ac4090b3 not found: ID does not exist" Oct 13 07:03:10 crc kubenswrapper[4833]: I1013 07:03:10.638722 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" path="/var/lib/kubelet/pods/b1e72dd9-6116-40c9-a421-ebc21d507598/volumes" Oct 13 07:04:30 crc kubenswrapper[4833]: I1013 07:04:30.542285 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:04:30 crc kubenswrapper[4833]: I1013 07:04:30.542903 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.853732 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pkvvb"] Oct 13 07:04:35 crc kubenswrapper[4833]: E1013 07:04:35.854519 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerName="registry-server" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.854549 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerName="registry-server" Oct 13 07:04:35 crc kubenswrapper[4833]: E1013 07:04:35.854566 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerName="extract-utilities" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.854572 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerName="extract-utilities" Oct 13 07:04:35 crc kubenswrapper[4833]: E1013 07:04:35.854588 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerName="extract-content" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.854594 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerName="extract-content" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.854741 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e72dd9-6116-40c9-a421-ebc21d507598" containerName="registry-server" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 
07:04:35.855672 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.871189 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pkvvb"] Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.883289 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-catalog-content\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.883465 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-utilities\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.883721 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrnd\" (UniqueName: \"kubernetes.io/projected/c840d99f-2833-49e6-bde2-d0568232a9e3-kube-api-access-dxrnd\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.985184 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-catalog-content\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.985274 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-utilities\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.985338 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrnd\" (UniqueName: \"kubernetes.io/projected/c840d99f-2833-49e6-bde2-d0568232a9e3-kube-api-access-dxrnd\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.985818 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-utilities\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:35 crc kubenswrapper[4833]: I1013 07:04:35.986101 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-catalog-content\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:36 crc 
kubenswrapper[4833]: I1013 07:04:36.025793 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrnd\" (UniqueName: \"kubernetes.io/projected/c840d99f-2833-49e6-bde2-d0568232a9e3-kube-api-access-dxrnd\") pod \"certified-operators-pkvvb\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:36 crc kubenswrapper[4833]: I1013 07:04:36.178737 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:36 crc kubenswrapper[4833]: I1013 07:04:36.701464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pkvvb"] Oct 13 07:04:37 crc kubenswrapper[4833]: I1013 07:04:37.501688 4833 generic.go:334] "Generic (PLEG): container finished" podID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerID="27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c" exitCode=0 Oct 13 07:04:37 crc kubenswrapper[4833]: I1013 07:04:37.501808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkvvb" event={"ID":"c840d99f-2833-49e6-bde2-d0568232a9e3","Type":"ContainerDied","Data":"27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c"} Oct 13 07:04:37 crc kubenswrapper[4833]: I1013 07:04:37.501936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkvvb" event={"ID":"c840d99f-2833-49e6-bde2-d0568232a9e3","Type":"ContainerStarted","Data":"54a8cf257948f13854ad221a50a07fe426a68f6415594be07874ca109b68c68b"} Oct 13 07:04:38 crc kubenswrapper[4833]: I1013 07:04:38.513293 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkvvb" event={"ID":"c840d99f-2833-49e6-bde2-d0568232a9e3","Type":"ContainerStarted","Data":"764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c"} Oct 13 07:04:39 crc kubenswrapper[4833]: I1013 07:04:39.528859 4833 generic.go:334] "Generic (PLEG): container finished" podID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerID="764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c" exitCode=0 Oct 13 07:04:39 crc kubenswrapper[4833]: I1013 07:04:39.528944 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkvvb" event={"ID":"c840d99f-2833-49e6-bde2-d0568232a9e3","Type":"ContainerDied","Data":"764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c"} Oct 13 07:04:40 crc kubenswrapper[4833]: I1013 07:04:40.542009 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkvvb" event={"ID":"c840d99f-2833-49e6-bde2-d0568232a9e3","Type":"ContainerStarted","Data":"a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c"} Oct 13 07:04:40 crc kubenswrapper[4833]: I1013 07:04:40.562483 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pkvvb" podStartSLOduration=3.123607237 podStartE2EDuration="5.562466966s" podCreationTimestamp="2025-10-13 07:04:35 +0000 UTC" firstStartedPulling="2025-10-13 07:04:37.50312221 +0000 UTC m=+2167.603545126" lastFinishedPulling="2025-10-13 07:04:39.941981939 +0000 UTC m=+2170.042404855" observedRunningTime="2025-10-13 07:04:40.560341856 +0000 UTC m=+2170.660764772" watchObservedRunningTime="2025-10-13 07:04:40.562466966 +0000 UTC m=+2170.662889902" Oct 13 07:04:46 crc kubenswrapper[4833]: I1013 
07:04:46.179026 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:46 crc kubenswrapper[4833]: I1013 07:04:46.179423 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:46 crc kubenswrapper[4833]: I1013 07:04:46.235207 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:46 crc kubenswrapper[4833]: I1013 07:04:46.641972 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:46 crc kubenswrapper[4833]: I1013 07:04:46.691354 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pkvvb"] Oct 13 07:04:48 crc kubenswrapper[4833]: I1013 07:04:48.601636 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pkvvb" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerName="registry-server" containerID="cri-o://a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c" gracePeriod=2 Oct 13 07:04:48 crc kubenswrapper[4833]: I1013 07:04:48.953377 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:48 crc kubenswrapper[4833]: I1013 07:04:48.975311 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxrnd\" (UniqueName: \"kubernetes.io/projected/c840d99f-2833-49e6-bde2-d0568232a9e3-kube-api-access-dxrnd\") pod \"c840d99f-2833-49e6-bde2-d0568232a9e3\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " Oct 13 07:04:48 crc kubenswrapper[4833]: I1013 07:04:48.975860 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-catalog-content\") pod \"c840d99f-2833-49e6-bde2-d0568232a9e3\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " Oct 13 07:04:48 crc kubenswrapper[4833]: I1013 07:04:48.975944 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-utilities\") pod \"c840d99f-2833-49e6-bde2-d0568232a9e3\" (UID: \"c840d99f-2833-49e6-bde2-d0568232a9e3\") " Oct 13 07:04:48 crc kubenswrapper[4833]: I1013 07:04:48.977285 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-utilities" (OuterVolumeSpecName: "utilities") pod "c840d99f-2833-49e6-bde2-d0568232a9e3" (UID: "c840d99f-2833-49e6-bde2-d0568232a9e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:04:48 crc kubenswrapper[4833]: I1013 07:04:48.985746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c840d99f-2833-49e6-bde2-d0568232a9e3-kube-api-access-dxrnd" (OuterVolumeSpecName: "kube-api-access-dxrnd") pod "c840d99f-2833-49e6-bde2-d0568232a9e3" (UID: "c840d99f-2833-49e6-bde2-d0568232a9e3"). InnerVolumeSpecName "kube-api-access-dxrnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.077196 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxrnd\" (UniqueName: \"kubernetes.io/projected/c840d99f-2833-49e6-bde2-d0568232a9e3-kube-api-access-dxrnd\") on node \"crc\" DevicePath \"\"" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.077227 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.185004 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c840d99f-2833-49e6-bde2-d0568232a9e3" (UID: "c840d99f-2833-49e6-bde2-d0568232a9e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.279768 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c840d99f-2833-49e6-bde2-d0568232a9e3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.611734 4833 generic.go:334] "Generic (PLEG): container finished" podID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerID="a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c" exitCode=0 Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.611785 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkvvb" event={"ID":"c840d99f-2833-49e6-bde2-d0568232a9e3","Type":"ContainerDied","Data":"a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c"} Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.611828 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pkvvb" event={"ID":"c840d99f-2833-49e6-bde2-d0568232a9e3","Type":"ContainerDied","Data":"54a8cf257948f13854ad221a50a07fe426a68f6415594be07874ca109b68c68b"} Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.611850 4833 scope.go:117] "RemoveContainer" containerID="a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.611793 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pkvvb" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.631453 4833 scope.go:117] "RemoveContainer" containerID="764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.657557 4833 scope.go:117] "RemoveContainer" containerID="27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.659390 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pkvvb"] Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.664270 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pkvvb"] Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.684846 4833 scope.go:117] "RemoveContainer" containerID="a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c" Oct 13 07:04:49 crc kubenswrapper[4833]: E1013 07:04:49.685209 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c\": container with ID starting with a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c not found: ID does not exist" containerID="a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.685239 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c"} err="failed to get container status \"a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c\": rpc error: code = NotFound desc = could not find container \"a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c\": container with ID starting with a95eabd7160de7dc73aca5a1f8230612310e2d6cfc262aefc01917c8f3a0e86c not found: ID does not exist" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.685260 4833 scope.go:117] "RemoveContainer" containerID="764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c" Oct 13 07:04:49 crc kubenswrapper[4833]: E1013 07:04:49.685452 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c\": container with ID starting with 764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c not found: ID does not exist" containerID="764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.685476 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c"} err="failed to get container status \"764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c\": rpc error: code = NotFound desc = could not find container \"764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c\": container with ID starting with 764ee5e09df0e416516e758ba0d08529a69b290a33bffa3a91336f79be008a8c not found: ID does not exist" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.685488 4833 scope.go:117] "RemoveContainer" containerID="27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c" Oct 13 07:04:49 crc kubenswrapper[4833]: E1013 07:04:49.685740 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c\": container with ID starting with 27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c not found: ID does not exist" containerID="27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c" Oct 13 07:04:49 crc kubenswrapper[4833]: I1013 07:04:49.685785 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c"} err="failed to get container status \"27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c\": rpc error: code = NotFound desc = could not find container \"27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c\": container with ID starting with 27421ac75394ead76648e44c67654341f2c580e440108ec42ba9fbb83737d82c not found: ID does not exist" Oct 13 07:04:50 crc kubenswrapper[4833]: I1013 07:04:50.645343 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" path="/var/lib/kubelet/pods/c840d99f-2833-49e6-bde2-d0568232a9e3/volumes" Oct 13 07:05:00 crc kubenswrapper[4833]: I1013 07:05:00.543109 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:05:00 crc kubenswrapper[4833]: I1013 07:05:00.543744 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.542814 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.543455 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.543521 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.544405 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.544501 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" gracePeriod=600 Oct 13 07:05:30 crc kubenswrapper[4833]: E1013 07:05:30.670521 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.926486 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" exitCode=0 Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.926529 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"} Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.926667 4833 scope.go:117] "RemoveContainer" containerID="43b26ec40f4a2a9a47572e66a50a140a105b3043936cbce1085e1ab35bf9a834" Oct 13 07:05:30 crc kubenswrapper[4833]: I1013 07:05:30.927049 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:05:30 crc kubenswrapper[4833]: E1013 07:05:30.927245 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:05:42 crc kubenswrapper[4833]: I1013 07:05:42.627663 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:05:42 crc kubenswrapper[4833]: E1013 07:05:42.628297 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:05:57 crc kubenswrapper[4833]: I1013 07:05:57.627163 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:05:57 crc kubenswrapper[4833]: E1013 07:05:57.628121 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:06:10 crc kubenswrapper[4833]: I1013 07:06:10.631167 4833 scope.go:117] 
"RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:06:10 crc kubenswrapper[4833]: E1013 07:06:10.631904 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:06:22 crc kubenswrapper[4833]: I1013 07:06:22.627044 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:06:22 crc kubenswrapper[4833]: E1013 07:06:22.627888 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:06:34 crc kubenswrapper[4833]: I1013 07:06:34.627106 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:06:34 crc kubenswrapper[4833]: E1013 07:06:34.630017 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:06:49 crc kubenswrapper[4833]: I1013 07:06:49.627148 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:06:49 crc kubenswrapper[4833]: E1013 07:06:49.627871 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:07:01 crc kubenswrapper[4833]: I1013 07:07:01.627272 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:07:01 crc kubenswrapper[4833]: E1013 07:07:01.628946 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:07:14 crc kubenswrapper[4833]: I1013 07:07:14.627444 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:07:14 crc kubenswrapper[4833]: E1013 07:07:14.628244 4833 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:07:29 crc kubenswrapper[4833]: I1013 07:07:29.628175 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:07:29 crc kubenswrapper[4833]: E1013 07:07:29.629336 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:07:40 crc kubenswrapper[4833]: I1013 07:07:40.635372 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:07:40 crc kubenswrapper[4833]: E1013 07:07:40.636877 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:07:53 crc kubenswrapper[4833]: I1013 07:07:53.626834 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:07:53 crc kubenswrapper[4833]: E1013 07:07:53.627721 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:08:06 crc kubenswrapper[4833]: I1013 07:08:06.626849 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:08:06 crc kubenswrapper[4833]: E1013 07:08:06.627609 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:08:21 crc kubenswrapper[4833]: I1013 07:08:21.627853 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:08:21 crc kubenswrapper[4833]: E1013 07:08:21.628836 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:08:25 crc kubenswrapper[4833]: I1013 07:08:25.908578 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fl56v"] Oct 13 07:08:25 crc kubenswrapper[4833]: E1013 07:08:25.909188 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerName="extract-utilities" Oct 13 07:08:25 crc kubenswrapper[4833]: I1013 07:08:25.909203 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerName="extract-utilities" Oct 13 07:08:25 crc kubenswrapper[4833]: E1013 07:08:25.909234 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerName="extract-content" Oct 13 07:08:25 crc kubenswrapper[4833]: I1013 07:08:25.909244 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerName="extract-content" Oct 13 07:08:25 crc kubenswrapper[4833]: E1013 07:08:25.909263 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerName="registry-server" Oct 13 07:08:25 crc kubenswrapper[4833]: I1013 07:08:25.909271 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerName="registry-server" Oct 13 07:08:25 crc kubenswrapper[4833]: I1013 07:08:25.909445 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c840d99f-2833-49e6-bde2-d0568232a9e3" containerName="registry-server" Oct 13 07:08:25 crc kubenswrapper[4833]: I1013 07:08:25.910743 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:25 crc kubenswrapper[4833]: I1013 07:08:25.926746 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fl56v"] Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.106996 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2fs\" (UniqueName: \"kubernetes.io/projected/e48cfb66-5672-427f-ac38-4c191d8735e9-kube-api-access-2t2fs\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.107103 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e48cfb66-5672-427f-ac38-4c191d8735e9-catalog-content\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.107128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e48cfb66-5672-427f-ac38-4c191d8735e9-utilities\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.207917 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t2fs\" (UniqueName: \"kubernetes.io/projected/e48cfb66-5672-427f-ac38-4c191d8735e9-kube-api-access-2t2fs\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.207990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e48cfb66-5672-427f-ac38-4c191d8735e9-catalog-content\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.208015 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e48cfb66-5672-427f-ac38-4c191d8735e9-utilities\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.208436 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e48cfb66-5672-427f-ac38-4c191d8735e9-utilities\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.208970 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e48cfb66-5672-427f-ac38-4c191d8735e9-catalog-content\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.228028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2t2fs\" (UniqueName: \"kubernetes.io/projected/e48cfb66-5672-427f-ac38-4c191d8735e9-kube-api-access-2t2fs\") pod \"redhat-operators-fl56v\" (UID: \"e48cfb66-5672-427f-ac38-4c191d8735e9\") " pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.246636 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:26 crc kubenswrapper[4833]: I1013 07:08:26.693926 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fl56v"] Oct 13 07:08:27 crc kubenswrapper[4833]: I1013 07:08:27.382831 4833 generic.go:334] "Generic (PLEG): container finished" podID="e48cfb66-5672-427f-ac38-4c191d8735e9" containerID="bfc92cd488b47c7b27604d5b16474bd90ea113105ab4494a4715cf12f0557336" exitCode=0 Oct 13 07:08:27 crc kubenswrapper[4833]: I1013 07:08:27.382934 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl56v" event={"ID":"e48cfb66-5672-427f-ac38-4c191d8735e9","Type":"ContainerDied","Data":"bfc92cd488b47c7b27604d5b16474bd90ea113105ab4494a4715cf12f0557336"} Oct 13 07:08:27 crc kubenswrapper[4833]: I1013 07:08:27.383192 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl56v" event={"ID":"e48cfb66-5672-427f-ac38-4c191d8735e9","Type":"ContainerStarted","Data":"9af8efcc333448031c480c9fa1250f6b6fa34165da9cf408d52134ac03c90018"} Oct 13 07:08:27 crc kubenswrapper[4833]: I1013 07:08:27.386573 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:08:35 crc kubenswrapper[4833]: I1013 07:08:35.443761 4833 generic.go:334] "Generic (PLEG): container finished" podID="e48cfb66-5672-427f-ac38-4c191d8735e9" containerID="ba28aa6c9cd01c3928a754de2dea9b11c869214970bfa20a45e07f5fe650c598" exitCode=0 Oct 13 07:08:35 crc kubenswrapper[4833]: I1013 07:08:35.443810 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl56v" event={"ID":"e48cfb66-5672-427f-ac38-4c191d8735e9","Type":"ContainerDied","Data":"ba28aa6c9cd01c3928a754de2dea9b11c869214970bfa20a45e07f5fe650c598"} Oct 13 07:08:36 crc kubenswrapper[4833]: I1013 07:08:36.454427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl56v" event={"ID":"e48cfb66-5672-427f-ac38-4c191d8735e9","Type":"ContainerStarted","Data":"361ddb1d99da32faecebf9f20fedfcf69d44ede478ae7e3cee7c53e85758754d"} Oct 13 07:08:36 crc kubenswrapper[4833]: I1013 07:08:36.477924 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fl56v" podStartSLOduration=2.962043219 podStartE2EDuration="11.477905932s" podCreationTimestamp="2025-10-13 07:08:25 +0000 UTC" firstStartedPulling="2025-10-13 07:08:27.386317727 +0000 UTC m=+2397.486740643" lastFinishedPulling="2025-10-13 07:08:35.90218044 +0000 UTC m=+2406.002603356" observedRunningTime="2025-10-13 07:08:36.472328986 +0000 UTC m=+2406.572751932" watchObservedRunningTime="2025-10-13 07:08:36.477905932 +0000 UTC m=+2406.578328848" Oct 13 07:08:36 crc kubenswrapper[4833]: I1013 07:08:36.627747 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:08:36 crc kubenswrapper[4833]: E1013 07:08:36.627930 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:08:46 crc kubenswrapper[4833]: I1013 07:08:46.248034 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:46 crc kubenswrapper[4833]: I1013 07:08:46.248840 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:46 crc kubenswrapper[4833]: I1013 07:08:46.304714 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:46 crc kubenswrapper[4833]: I1013 07:08:46.607275 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fl56v" Oct 13 07:08:46 crc kubenswrapper[4833]: I1013 07:08:46.687984 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fl56v"] Oct 13 07:08:46 crc kubenswrapper[4833]: I1013 07:08:46.723323 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9hd7"] Oct 13 07:08:46 crc kubenswrapper[4833]: I1013 07:08:46.723575 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n9hd7" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" containerName="registry-server" containerID="cri-o://063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0" gracePeriod=2 Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.179003 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9hd7" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.220017 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-catalog-content\") pod \"a092b978-dd26-456c-bf3c-310a83f188e7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.220196 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nscz5\" (UniqueName: \"kubernetes.io/projected/a092b978-dd26-456c-bf3c-310a83f188e7-kube-api-access-nscz5\") pod \"a092b978-dd26-456c-bf3c-310a83f188e7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.220226 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-utilities\") pod \"a092b978-dd26-456c-bf3c-310a83f188e7\" (UID: \"a092b978-dd26-456c-bf3c-310a83f188e7\") " Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.220898 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-utilities" (OuterVolumeSpecName: "utilities") pod "a092b978-dd26-456c-bf3c-310a83f188e7" (UID: "a092b978-dd26-456c-bf3c-310a83f188e7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.224949 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a092b978-dd26-456c-bf3c-310a83f188e7-kube-api-access-nscz5" (OuterVolumeSpecName: "kube-api-access-nscz5") pod "a092b978-dd26-456c-bf3c-310a83f188e7" (UID: "a092b978-dd26-456c-bf3c-310a83f188e7"). InnerVolumeSpecName "kube-api-access-nscz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.302149 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a092b978-dd26-456c-bf3c-310a83f188e7" (UID: "a092b978-dd26-456c-bf3c-310a83f188e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.321477 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nscz5\" (UniqueName: \"kubernetes.io/projected/a092b978-dd26-456c-bf3c-310a83f188e7-kube-api-access-nscz5\") on node \"crc\" DevicePath \"\"" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.321509 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.321519 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a092b978-dd26-456c-bf3c-310a83f188e7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.549849 4833 generic.go:334] "Generic (PLEG): container finished" podID="a092b978-dd26-456c-bf3c-310a83f188e7" containerID="063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0" exitCode=0 Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.549918 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n9hd7" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.549938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hd7" event={"ID":"a092b978-dd26-456c-bf3c-310a83f188e7","Type":"ContainerDied","Data":"063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0"} Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.550259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9hd7" event={"ID":"a092b978-dd26-456c-bf3c-310a83f188e7","Type":"ContainerDied","Data":"f3bc10fb4d27da74cfd4aece4c5cd403275b6dba26a976b7d5e9001dd5ae89a5"} Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.550292 4833 scope.go:117] "RemoveContainer" containerID="063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.575715 4833 scope.go:117] "RemoveContainer" containerID="a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.581972 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9hd7"] Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.586622 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n9hd7"] Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.605483 4833 scope.go:117] "RemoveContainer" containerID="301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.622369 4833 scope.go:117] "RemoveContainer" containerID="063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0" Oct 13 07:08:47 crc kubenswrapper[4833]: E1013 07:08:47.623012 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0\": container with ID starting with 063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0 not found: ID does not exist" containerID="063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.623046 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0"} err="failed to get container status \"063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0\": rpc error: code = NotFound desc = could not find container \"063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0\": container with ID starting with 063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0 not found: ID does not exist" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.623077 4833 scope.go:117] "RemoveContainer" containerID="a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a" Oct 13 07:08:47 crc kubenswrapper[4833]: E1013 07:08:47.623416 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a\": container with ID starting with a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a not found: ID does not exist" containerID="a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.623460 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a"} err="failed to get container status \"a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a\": rpc error: code = NotFound desc = could not find container \"a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a\": container with ID starting with a591bb9f1aa1d23eabd646415f86e5ba836179be94b19f4778d8f0394c2e124a not found: ID does not exist" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.623491 4833 scope.go:117] "RemoveContainer" containerID="301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837" Oct 13 07:08:47 crc kubenswrapper[4833]: E1013 07:08:47.623897 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837\": container with ID starting with 301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837 not found: ID does not exist" containerID="301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837" Oct 13 07:08:47 crc kubenswrapper[4833]: I1013 07:08:47.623923 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837"} err="failed to get container status \"301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837\": rpc error: code = NotFound desc = could not find container \"301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837\": container with ID starting with 301e24cdb77e7c13c16881c6fde1518c05e17b3e0bac0b3648fab1efcbebe837 not found: ID does not exist" Oct 13 07:08:48 crc kubenswrapper[4833]: I1013 07:08:48.635026 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" path="/var/lib/kubelet/pods/a092b978-dd26-456c-bf3c-310a83f188e7/volumes" Oct 13 07:08:49 crc kubenswrapper[4833]: I1013 07:08:49.627861 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:08:49 crc kubenswrapper[4833]: E1013 07:08:49.628324 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:09:02 crc kubenswrapper[4833]: I1013 07:09:02.627485 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:09:02 crc kubenswrapper[4833]: E1013 07:09:02.629000 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:09:16 crc kubenswrapper[4833]: I1013 07:09:16.627103 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:09:16 crc 
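Note: the three RemoveContainer → "ContainerStatus from runtime service failed" (NotFound) → "DeleteContainer returned error" triples above are benign. By the time the kubelet re-issues the removal, CRI-O has already purged the container, and a container that no longer exists is effectively already removed. A minimal sketch of that idempotent-cleanup pattern (illustrative names and a simplified status call, not kubelet source):

```go
package main

import (
	"errors"
	"fmt"
)

// Stands in for a gRPC status with codes.NotFound from the CRI runtime.
var errNotFound = errors.New("rpc error: code = NotFound")

// removeContainer treats a missing container as already removed, so a
// second cleanup pass (e.g. after a PLEG replay) cannot fail.
func removeContainer(id string, statusFn func(string) error) error {
	if err := statusFn(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %.12s already gone, treating as removed\n", id)
			return nil
		}
		return err
	}
	fmt.Printf("removing container %.12s\n", id)
	return nil
}

func main() {
	gone := func(string) error { return errNotFound }
	_ = removeContainer("063007b2b6bc580ca334b4de0a575123f603f67e8b687f07cc4701ed05b0eff0", gone)
}
```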
Oct 13 07:08:48 crc kubenswrapper[4833]: I1013 07:08:48.635026 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" path="/var/lib/kubelet/pods/a092b978-dd26-456c-bf3c-310a83f188e7/volumes"
Oct 13 07:08:49 crc kubenswrapper[4833]: I1013 07:08:49.627861 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:08:49 crc kubenswrapper[4833]: E1013 07:08:49.628324 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:09:02 crc kubenswrapper[4833]: I1013 07:09:02.627485 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:09:02 crc kubenswrapper[4833]: E1013 07:09:02.629000 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:09:16 crc kubenswrapper[4833]: I1013 07:09:16.627103 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:09:16 crc kubenswrapper[4833]: E1013 07:09:16.628082 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:09:29 crc kubenswrapper[4833]: I1013 07:09:29.626729 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:09:29 crc kubenswrapper[4833]: E1013 07:09:29.627742 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:09:44 crc kubenswrapper[4833]: I1013 07:09:44.627770 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:09:44 crc kubenswrapper[4833]: E1013 07:09:44.628571 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:09:59 crc kubenswrapper[4833]: I1013 07:09:59.627149 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:09:59 crc kubenswrapper[4833]: E1013 07:09:59.628145 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:10:11 crc kubenswrapper[4833]: I1013 07:10:11.629094 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:10:11 crc kubenswrapper[4833]: E1013 07:10:11.630288 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:10:24 crc kubenswrapper[4833]: I1013 07:10:24.629321 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:10:24 crc kubenswrapper[4833]: E1013 07:10:24.630747 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:10:36 crc kubenswrapper[4833]: I1013 07:10:36.626582 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c"
Oct 13 07:10:37 crc kubenswrapper[4833]: I1013 07:10:37.427351 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"5087c8644018381716899a6c740f0cf8cdc609a082dfd4f0fc00630109fb219b"}
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.132021 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gn94g"]
Oct 13 07:12:41 crc kubenswrapper[4833]: E1013 07:12:41.133044 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" containerName="extract-content"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.133058 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" containerName="extract-content"
Oct 13 07:12:41 crc kubenswrapper[4833]: E1013 07:12:41.133071 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" containerName="extract-utilities"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.133078 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" containerName="extract-utilities"
Oct 13 07:12:41 crc kubenswrapper[4833]: E1013 07:12:41.133089 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" containerName="registry-server"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.133095 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" containerName="registry-server"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.133255 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a092b978-dd26-456c-bf3c-310a83f188e7" containerName="registry-server"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.134234 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.150081 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn94g"]
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.321661 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-utilities\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.322051 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-catalog-content\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.322218 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc689\" (UniqueName: \"kubernetes.io/projected/b7afa009-396d-41c5-a36e-bc5acc74888f-kube-api-access-tc689\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.423046 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-utilities\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.423138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-catalog-content\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.423168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc689\" (UniqueName: \"kubernetes.io/projected/b7afa009-396d-41c5-a36e-bc5acc74888f-kube-api-access-tc689\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.423713 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-utilities\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.423893 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-catalog-content\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.448368 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc689\" (UniqueName: \"kubernetes.io/projected/b7afa009-396d-41c5-a36e-bc5acc74888f-kube-api-access-tc689\") pod \"redhat-marketplace-gn94g\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.456921 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:41 crc kubenswrapper[4833]: I1013 07:12:41.861449 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn94g"]
Oct 13 07:12:42 crc kubenswrapper[4833]: I1013 07:12:42.409193 4833 generic.go:334] "Generic (PLEG): container finished" podID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerID="d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa" exitCode=0
Oct 13 07:12:42 crc kubenswrapper[4833]: I1013 07:12:42.409237 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn94g" event={"ID":"b7afa009-396d-41c5-a36e-bc5acc74888f","Type":"ContainerDied","Data":"d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa"}
Oct 13 07:12:42 crc kubenswrapper[4833]: I1013 07:12:42.409265 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn94g" event={"ID":"b7afa009-396d-41c5-a36e-bc5acc74888f","Type":"ContainerStarted","Data":"65e604d6b392911f1515eadf9d16190e26274649d4732913831131203cdfb70e"}
Oct 13 07:12:45 crc kubenswrapper[4833]: I1013 07:12:45.429579 4833 generic.go:334] "Generic (PLEG): container finished" podID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerID="7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0" exitCode=0
Oct 13 07:12:45 crc kubenswrapper[4833]: I1013 07:12:45.429667 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn94g" event={"ID":"b7afa009-396d-41c5-a36e-bc5acc74888f","Type":"ContainerDied","Data":"7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0"}
Oct 13 07:12:46 crc kubenswrapper[4833]: I1013 07:12:46.438364 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn94g" event={"ID":"b7afa009-396d-41c5-a36e-bc5acc74888f","Type":"ContainerStarted","Data":"898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc"}
Oct 13 07:12:46 crc kubenswrapper[4833]: I1013 07:12:46.452409 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gn94g" podStartSLOduration=2.034812615 podStartE2EDuration="5.452391455s" podCreationTimestamp="2025-10-13 07:12:41 +0000 UTC" firstStartedPulling="2025-10-13 07:12:42.411049927 +0000 UTC m=+2652.511472843" lastFinishedPulling="2025-10-13 07:12:45.828628767 +0000 UTC m=+2655.929051683" observedRunningTime="2025-10-13 07:12:46.451187971 +0000 UTC m=+2656.551610877" watchObservedRunningTime="2025-10-13 07:12:46.452391455 +0000 UTC m=+2656.552814371"
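Note: the pod_startup_latency_tracker entry is internally consistent. podStartE2EDuration is observedRunningTime minus podCreationTimestamp (5.452s); the image pull (lastFinishedPulling minus firstStartedPulling) took 3.418s; and podStartSLOduration excludes pull time: 5.452 - 3.418 = 2.035s, matching the logged 2.034812615. A quick check, purely arithmetic on the monotonic (m=+) readings from the entry above:

```go
package main

import "fmt"

func main() {
	// Monotonic clock readings (the m=+ values) from the tracker entry.
	const (
		firstStartedPulling = 2652.511472843
		lastFinishedPulling = 2655.929051683
		e2e                 = 5.452391455 // observedRunningTime - podCreationTimestamp
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull took %.9fs\n", pull)      // ~3.417578840s
	fmt.Printf("SLO duration    %.9fs\n", e2e-pull)  // ~2.034812615s, as logged
}
```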
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gn94g" Oct 13 07:12:52 crc kubenswrapper[4833]: I1013 07:12:52.582503 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gn94g" Oct 13 07:12:52 crc kubenswrapper[4833]: I1013 07:12:52.626532 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn94g"] Oct 13 07:12:54 crc kubenswrapper[4833]: I1013 07:12:54.507517 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gn94g" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerName="registry-server" containerID="cri-o://898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc" gracePeriod=2 Oct 13 07:12:54 crc kubenswrapper[4833]: I1013 07:12:54.895175 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn94g" Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.016240 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-utilities\") pod \"b7afa009-396d-41c5-a36e-bc5acc74888f\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.016336 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc689\" (UniqueName: \"kubernetes.io/projected/b7afa009-396d-41c5-a36e-bc5acc74888f-kube-api-access-tc689\") pod \"b7afa009-396d-41c5-a36e-bc5acc74888f\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.016417 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-catalog-content\") pod \"b7afa009-396d-41c5-a36e-bc5acc74888f\" (UID: \"b7afa009-396d-41c5-a36e-bc5acc74888f\") " Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.017245 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-utilities" (OuterVolumeSpecName: "utilities") pod "b7afa009-396d-41c5-a36e-bc5acc74888f" (UID: "b7afa009-396d-41c5-a36e-bc5acc74888f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.022656 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7afa009-396d-41c5-a36e-bc5acc74888f-kube-api-access-tc689" (OuterVolumeSpecName: "kube-api-access-tc689") pod "b7afa009-396d-41c5-a36e-bc5acc74888f" (UID: "b7afa009-396d-41c5-a36e-bc5acc74888f"). InnerVolumeSpecName "kube-api-access-tc689". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.029817 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7afa009-396d-41c5-a36e-bc5acc74888f" (UID: "b7afa009-396d-41c5-a36e-bc5acc74888f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.118112 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc689\" (UniqueName: \"kubernetes.io/projected/b7afa009-396d-41c5-a36e-bc5acc74888f-kube-api-access-tc689\") on node \"crc\" DevicePath \"\"" Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.118146 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.118155 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7afa009-396d-41c5-a36e-bc5acc74888f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.515223 4833 generic.go:334] "Generic (PLEG): container finished" podID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerID="898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc" exitCode=0 Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.515264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn94g" event={"ID":"b7afa009-396d-41c5-a36e-bc5acc74888f","Type":"ContainerDied","Data":"898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc"} Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.515300 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn94g" event={"ID":"b7afa009-396d-41c5-a36e-bc5acc74888f","Type":"ContainerDied","Data":"65e604d6b392911f1515eadf9d16190e26274649d4732913831131203cdfb70e"} Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.515322 4833 util.go:48] "No ready sandbox for pod can be found. 
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.515322 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn94g"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.515342 4833 scope.go:117] "RemoveContainer" containerID="898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.546350 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn94g"]
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.553886 4833 scope.go:117] "RemoveContainer" containerID="7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.554719 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn94g"]
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.572890 4833 scope.go:117] "RemoveContainer" containerID="d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.597393 4833 scope.go:117] "RemoveContainer" containerID="898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc"
Oct 13 07:12:55 crc kubenswrapper[4833]: E1013 07:12:55.597822 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc\": container with ID starting with 898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc not found: ID does not exist" containerID="898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.597889 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc"} err="failed to get container status \"898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc\": rpc error: code = NotFound desc = could not find container \"898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc\": container with ID starting with 898415bb57a846424610d715b500f817f72d3b0c79e0d200a54e30de97ee92cc not found: ID does not exist"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.597931 4833 scope.go:117] "RemoveContainer" containerID="7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0"
Oct 13 07:12:55 crc kubenswrapper[4833]: E1013 07:12:55.598343 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0\": container with ID starting with 7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0 not found: ID does not exist" containerID="7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.598387 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0"} err="failed to get container status \"7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0\": rpc error: code = NotFound desc = could not find container \"7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0\": container with ID starting with 7bdd4601aa2a7fc17b9d666d24488934bc47e3b8056b77a96b27177b5412cba0 not found: ID does not exist"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.598413 4833 scope.go:117] "RemoveContainer" containerID="d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa"
Oct 13 07:12:55 crc kubenswrapper[4833]: E1013 07:12:55.599028 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa\": container with ID starting with d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa not found: ID does not exist" containerID="d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa"
Oct 13 07:12:55 crc kubenswrapper[4833]: I1013 07:12:55.599063 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa"} err="failed to get container status \"d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa\": rpc error: code = NotFound desc = could not find container \"d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa\": container with ID starting with d362cf6de1a6fdf28519c655847789d8bb03d499734dc56059251239bec656aa not found: ID does not exist"
Oct 13 07:12:56 crc kubenswrapper[4833]: I1013 07:12:56.638559 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" path="/var/lib/kubelet/pods/b7afa009-396d-41c5-a36e-bc5acc74888f/volumes"
Oct 13 07:13:00 crc kubenswrapper[4833]: I1013 07:13:00.543059 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 07:13:00 crc kubenswrapper[4833]: I1013 07:13:00.543571 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.905167 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2nprx"]
Oct 13 07:13:07 crc kubenswrapper[4833]: E1013 07:13:07.905975 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerName="registry-server"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.905987 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerName="registry-server"
Oct 13 07:13:07 crc kubenswrapper[4833]: E1013 07:13:07.906013 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerName="extract-content"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.906019 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerName="extract-content"
Oct 13 07:13:07 crc kubenswrapper[4833]: E1013 07:13:07.906028 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerName="extract-utilities"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.906035 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerName="extract-utilities"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.906158 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7afa009-396d-41c5-a36e-bc5acc74888f" containerName="registry-server"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.908374 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.927245 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nprx"]
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.939262 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-utilities\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.939316 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jkmg\" (UniqueName: \"kubernetes.io/projected/2553b05a-5a97-451d-b999-2e2701492faf-kube-api-access-9jkmg\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:07 crc kubenswrapper[4833]: I1013 07:13:07.939349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-catalog-content\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.040535 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jkmg\" (UniqueName: \"kubernetes.io/projected/2553b05a-5a97-451d-b999-2e2701492faf-kube-api-access-9jkmg\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.040604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-catalog-content\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.040680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-utilities\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.041126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-utilities\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.041604 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-catalog-content\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.062257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jkmg\" (UniqueName: \"kubernetes.io/projected/2553b05a-5a97-451d-b999-2e2701492faf-kube-api-access-9jkmg\") pod \"community-operators-2nprx\" (UID: \"2553b05a-5a97-451d-b999-2e2701492faf\") " pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.230892 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.526978 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nprx"]
Oct 13 07:13:08 crc kubenswrapper[4833]: I1013 07:13:08.616709 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nprx" event={"ID":"2553b05a-5a97-451d-b999-2e2701492faf","Type":"ContainerStarted","Data":"f971f175b7059161015276580a1646407aee5cde9bd19ea9c8c1ceed30ab732b"}
Oct 13 07:13:09 crc kubenswrapper[4833]: I1013 07:13:09.624929 4833 generic.go:334] "Generic (PLEG): container finished" podID="2553b05a-5a97-451d-b999-2e2701492faf" containerID="81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0" exitCode=0
Oct 13 07:13:09 crc kubenswrapper[4833]: I1013 07:13:09.624976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nprx" event={"ID":"2553b05a-5a97-451d-b999-2e2701492faf","Type":"ContainerDied","Data":"81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0"}
Oct 13 07:13:10 crc kubenswrapper[4833]: I1013 07:13:10.633310 4833 generic.go:334] "Generic (PLEG): container finished" podID="2553b05a-5a97-451d-b999-2e2701492faf" containerID="68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317" exitCode=0
Oct 13 07:13:10 crc kubenswrapper[4833]: I1013 07:13:10.636433 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nprx" event={"ID":"2553b05a-5a97-451d-b999-2e2701492faf","Type":"ContainerDied","Data":"68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317"}
Oct 13 07:13:11 crc kubenswrapper[4833]: I1013 07:13:11.642891 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nprx" event={"ID":"2553b05a-5a97-451d-b999-2e2701492faf","Type":"ContainerStarted","Data":"ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8"}
Oct 13 07:13:18 crc kubenswrapper[4833]: I1013 07:13:18.231571 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:18 crc kubenswrapper[4833]: I1013 07:13:18.232115 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:18 crc kubenswrapper[4833]: I1013 07:13:18.276186 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2nprx"
Oct 13 07:13:18 crc kubenswrapper[4833]: I1013 07:13:18.292712 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2nprx" podStartSLOduration=9.864612535 podStartE2EDuration="11.292693714s" podCreationTimestamp="2025-10-13 07:13:07 +0000 UTC" firstStartedPulling="2025-10-13 07:13:09.630800738 +0000 UTC m=+2679.731223664" lastFinishedPulling="2025-10-13 07:13:11.058881907 +0000 UTC m=+2681.159304843" observedRunningTime="2025-10-13 07:13:11.661179545 +0000 UTC m=+2681.761602461" watchObservedRunningTime="2025-10-13 07:13:18.292693714 +0000 UTC m=+2688.393116630"
Oct 13 07:13:18 crc kubenswrapper[4833]: I1013 07:13:18.736039 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2nprx"
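Note: the probe ordering above appears deliberate. Readiness is reported with an empty status while the startup probe is still unhealthy; only after startup flips to "started" (07:13:18.276) is readiness evaluated and reported "ready" (07:13:18.736). Liveness and readiness probing is gated on the startup probe succeeding once. Roughly, as illustrative pseudologic rather than kubelet's prober manager:

```go
package main

import "fmt"

type podProbes struct {
	started   bool        // set once the startup probe succeeds
	readiness func() bool // the configured readiness check
}

// syncReadiness mirrors the gating seen in the log: no readiness
// result at all until the startup probe has reported success.
func (p *podProbes) syncReadiness() string {
	if !p.started {
		return "" // shows up as status="" above
	}
	if p.readiness() {
		return "ready"
	}
	return "unready"
}

func main() {
	p := &podProbes{readiness: func() bool { return true }}
	fmt.Printf("readiness: %q\n", p.syncReadiness()) // ""
	p.started = true                                 // startup probe "started"
	fmt.Printf("readiness: %q\n", p.syncReadiness()) // "ready"
}
```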
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.340608 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.340643 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jkmg\" (UniqueName: \"kubernetes.io/projected/2553b05a-5a97-451d-b999-2e2701492faf-kube-api-access-9jkmg\") on node \"crc\" DevicePath \"\"" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.484653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2553b05a-5a97-451d-b999-2e2701492faf" (UID: "2553b05a-5a97-451d-b999-2e2701492faf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.542721 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2553b05a-5a97-451d-b999-2e2701492faf-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.725947 4833 generic.go:334] "Generic (PLEG): container finished" podID="2553b05a-5a97-451d-b999-2e2701492faf" containerID="ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8" exitCode=0 Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.725985 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nprx" event={"ID":"2553b05a-5a97-451d-b999-2e2701492faf","Type":"ContainerDied","Data":"ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8"} Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.726008 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nprx" event={"ID":"2553b05a-5a97-451d-b999-2e2701492faf","Type":"ContainerDied","Data":"f971f175b7059161015276580a1646407aee5cde9bd19ea9c8c1ceed30ab732b"} Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.726025 4833 scope.go:117] "RemoveContainer" containerID="ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.726063 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nprx" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.751113 4833 scope.go:117] "RemoveContainer" containerID="68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.769924 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nprx"] Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.778460 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2nprx"] Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.800286 4833 scope.go:117] "RemoveContainer" containerID="81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.835734 4833 scope.go:117] "RemoveContainer" containerID="ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8" Oct 13 07:13:21 crc kubenswrapper[4833]: E1013 07:13:21.836603 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8\": container with ID starting with ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8 not found: ID does not exist" containerID="ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.836639 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8"} err="failed to get container status \"ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8\": rpc error: code = NotFound desc = could not find container \"ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8\": container with ID starting with ac28439fe9bff058e9d9f86c31d791568454e3c685fba8bc3c09049058c26af8 not found: ID does not exist" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.836665 4833 scope.go:117] "RemoveContainer" containerID="68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317" Oct 13 07:13:21 crc kubenswrapper[4833]: E1013 07:13:21.836996 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317\": container with ID starting with 68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317 not found: ID does not exist" containerID="68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.837022 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317"} err="failed to get container status \"68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317\": rpc error: code = NotFound desc = could not find container \"68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317\": container with ID starting with 68d43e63055f5383b97c70e465d903cf2794d64f480bdb99e1a021e90a27a317 not found: ID does not exist" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.837039 4833 scope.go:117] "RemoveContainer" containerID="81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0" Oct 13 07:13:21 crc kubenswrapper[4833]: E1013 07:13:21.837245 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0\": container with ID starting with 81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0 not found: ID does not exist" containerID="81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0" Oct 13 07:13:21 crc kubenswrapper[4833]: I1013 07:13:21.837273 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0"} err="failed to get container status \"81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0\": rpc error: code = NotFound desc = could not find container \"81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0\": container with ID starting with 81f47c68b56585eb344072b10ba2779bc1e986937388812e89a29b4a8bba24e0 not found: ID does not exist" Oct 13 07:13:22 crc kubenswrapper[4833]: I1013 07:13:22.635606 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2553b05a-5a97-451d-b999-2e2701492faf" path="/var/lib/kubelet/pods/2553b05a-5a97-451d-b999-2e2701492faf/volumes" Oct 13 07:13:30 crc kubenswrapper[4833]: I1013 07:13:30.543109 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:13:30 crc kubenswrapper[4833]: I1013 07:13:30.544747 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:14:00 crc kubenswrapper[4833]: I1013 07:14:00.543097 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:14:00 crc kubenswrapper[4833]: I1013 07:14:00.543678 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:14:00 crc kubenswrapper[4833]: I1013 07:14:00.543731 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:14:00 crc kubenswrapper[4833]: I1013 07:14:00.544366 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5087c8644018381716899a6c740f0cf8cdc609a082dfd4f0fc00630109fb219b"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:14:00 crc kubenswrapper[4833]: I1013 07:14:00.544427 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://5087c8644018381716899a6c740f0cf8cdc609a082dfd4f0fc00630109fb219b" gracePeriod=600 Oct 13 07:14:01 crc kubenswrapper[4833]: I1013 07:14:01.018437 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="5087c8644018381716899a6c740f0cf8cdc609a082dfd4f0fc00630109fb219b" exitCode=0 Oct 13 07:14:01 crc kubenswrapper[4833]: I1013 07:14:01.018595 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"5087c8644018381716899a6c740f0cf8cdc609a082dfd4f0fc00630109fb219b"} Oct 13 07:14:01 crc kubenswrapper[4833]: I1013 07:14:01.018808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70"} Oct 13 07:14:01 crc kubenswrapper[4833]: I1013 07:14:01.018839 4833 scope.go:117] "RemoveContainer" containerID="5608dff901323ff44a1b097c0038694377adede4f15d8beccc52f5617256783c" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.136068 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs"] Oct 13 07:15:00 crc kubenswrapper[4833]: E1013 07:15:00.136890 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2553b05a-5a97-451d-b999-2e2701492faf" containerName="registry-server" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.136907 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2553b05a-5a97-451d-b999-2e2701492faf" containerName="registry-server" Oct 13 07:15:00 crc kubenswrapper[4833]: E1013 07:15:00.136937 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2553b05a-5a97-451d-b999-2e2701492faf" containerName="extract-content" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.136944 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2553b05a-5a97-451d-b999-2e2701492faf" containerName="extract-content" Oct 13 07:15:00 crc kubenswrapper[4833]: E1013 07:15:00.136969 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2553b05a-5a97-451d-b999-2e2701492faf" containerName="extract-utilities" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.136978 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2553b05a-5a97-451d-b999-2e2701492faf" containerName="extract-utilities" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.137145 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2553b05a-5a97-451d-b999-2e2701492faf" containerName="registry-server" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.137760 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.140187 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.141334 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.156464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs"] Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.239074 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a30bb4f-fdcb-48b3-819c-004e02282a56-config-volume\") pod \"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.239232 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpj7t\" (UniqueName: \"kubernetes.io/projected/8a30bb4f-fdcb-48b3-819c-004e02282a56-kube-api-access-dpj7t\") pod \"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.239263 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a30bb4f-fdcb-48b3-819c-004e02282a56-secret-volume\") pod \"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.340350 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a30bb4f-fdcb-48b3-819c-004e02282a56-config-volume\") pod \"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.340475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpj7t\" (UniqueName: \"kubernetes.io/projected/8a30bb4f-fdcb-48b3-819c-004e02282a56-kube-api-access-dpj7t\") pod \"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.340533 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a30bb4f-fdcb-48b3-819c-004e02282a56-secret-volume\") pod \"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.341474 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a30bb4f-fdcb-48b3-819c-004e02282a56-config-volume\") pod 
\"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.358281 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a30bb4f-fdcb-48b3-819c-004e02282a56-secret-volume\") pod \"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.360876 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpj7t\" (UniqueName: \"kubernetes.io/projected/8a30bb4f-fdcb-48b3-819c-004e02282a56-kube-api-access-dpj7t\") pod \"collect-profiles-29338995-m9jjs\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.462988 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:00 crc kubenswrapper[4833]: I1013 07:15:00.884438 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs"] Oct 13 07:15:01 crc kubenswrapper[4833]: I1013 07:15:01.502618 4833 generic.go:334] "Generic (PLEG): container finished" podID="8a30bb4f-fdcb-48b3-819c-004e02282a56" containerID="66b21a7bb3015aacf43423146dae504db52692ddd530dc3167ed13630a96bdcc" exitCode=0 Oct 13 07:15:01 crc kubenswrapper[4833]: I1013 07:15:01.502664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" event={"ID":"8a30bb4f-fdcb-48b3-819c-004e02282a56","Type":"ContainerDied","Data":"66b21a7bb3015aacf43423146dae504db52692ddd530dc3167ed13630a96bdcc"} Oct 13 07:15:01 crc kubenswrapper[4833]: I1013 07:15:01.502726 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" event={"ID":"8a30bb4f-fdcb-48b3-819c-004e02282a56","Type":"ContainerStarted","Data":"83406d0cc348dca2c088911417f1f0e50f45364fcccc89694d4b007233bf4390"} Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.794700 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.882055 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpj7t\" (UniqueName: \"kubernetes.io/projected/8a30bb4f-fdcb-48b3-819c-004e02282a56-kube-api-access-dpj7t\") pod \"8a30bb4f-fdcb-48b3-819c-004e02282a56\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.882098 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a30bb4f-fdcb-48b3-819c-004e02282a56-config-volume\") pod \"8a30bb4f-fdcb-48b3-819c-004e02282a56\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.882160 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a30bb4f-fdcb-48b3-819c-004e02282a56-secret-volume\") pod \"8a30bb4f-fdcb-48b3-819c-004e02282a56\" (UID: \"8a30bb4f-fdcb-48b3-819c-004e02282a56\") " Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.882895 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a30bb4f-fdcb-48b3-819c-004e02282a56-config-volume" (OuterVolumeSpecName: "config-volume") pod "8a30bb4f-fdcb-48b3-819c-004e02282a56" (UID: "8a30bb4f-fdcb-48b3-819c-004e02282a56"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.887307 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a30bb4f-fdcb-48b3-819c-004e02282a56-kube-api-access-dpj7t" (OuterVolumeSpecName: "kube-api-access-dpj7t") pod "8a30bb4f-fdcb-48b3-819c-004e02282a56" (UID: "8a30bb4f-fdcb-48b3-819c-004e02282a56"). InnerVolumeSpecName "kube-api-access-dpj7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.888649 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a30bb4f-fdcb-48b3-819c-004e02282a56-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8a30bb4f-fdcb-48b3-819c-004e02282a56" (UID: "8a30bb4f-fdcb-48b3-819c-004e02282a56"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.983231 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a30bb4f-fdcb-48b3-819c-004e02282a56-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.983273 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpj7t\" (UniqueName: \"kubernetes.io/projected/8a30bb4f-fdcb-48b3-819c-004e02282a56-kube-api-access-dpj7t\") on node \"crc\" DevicePath \"\"" Oct 13 07:15:02 crc kubenswrapper[4833]: I1013 07:15:02.983283 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a30bb4f-fdcb-48b3-819c-004e02282a56-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 07:15:03 crc kubenswrapper[4833]: I1013 07:15:03.518270 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" event={"ID":"8a30bb4f-fdcb-48b3-819c-004e02282a56","Type":"ContainerDied","Data":"83406d0cc348dca2c088911417f1f0e50f45364fcccc89694d4b007233bf4390"} Oct 13 07:15:03 crc kubenswrapper[4833]: I1013 07:15:03.518577 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83406d0cc348dca2c088911417f1f0e50f45364fcccc89694d4b007233bf4390" Oct 13 07:15:03 crc kubenswrapper[4833]: I1013 07:15:03.518326 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs" Oct 13 07:15:03 crc kubenswrapper[4833]: I1013 07:15:03.889863 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"] Oct 13 07:15:03 crc kubenswrapper[4833]: I1013 07:15:03.896277 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338950-4m7p7"] Oct 13 07:15:04 crc kubenswrapper[4833]: I1013 07:15:04.637562 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d537ffb6-77d0-4bfc-bc53-54cd70938e24" path="/var/lib/kubelet/pods/d537ffb6-77d0-4bfc-bc53-54cd70938e24/volumes" Oct 13 07:15:49 crc kubenswrapper[4833]: I1013 07:15:49.250351 4833 scope.go:117] "RemoveContainer" containerID="2f084ac0c9164b4220b602a3e14b51bd0d26a02c2b5adee8704ad4b6f0d06662" Oct 13 07:16:00 crc kubenswrapper[4833]: I1013 07:16:00.543272 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:16:00 crc kubenswrapper[4833]: I1013 07:16:00.543904 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:16:30 crc kubenswrapper[4833]: I1013 07:16:30.543131 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 13 07:16:30 crc kubenswrapper[4833]: I1013 07:16:30.543755 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:17:00 crc kubenswrapper[4833]: I1013 07:17:00.542826 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:17:00 crc kubenswrapper[4833]: I1013 07:17:00.543568 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:17:00 crc kubenswrapper[4833]: I1013 07:17:00.543627 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:17:00 crc kubenswrapper[4833]: I1013 07:17:00.544315 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:17:00 crc kubenswrapper[4833]: I1013 07:17:00.544390 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" gracePeriod=600 Oct 13 07:17:00 crc kubenswrapper[4833]: E1013 07:17:00.668147 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:17:01 crc kubenswrapper[4833]: I1013 07:17:01.430814 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" exitCode=0 Oct 13 07:17:01 crc kubenswrapper[4833]: I1013 07:17:01.430911 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70"} Oct 13 07:17:01 crc kubenswrapper[4833]: I1013 07:17:01.431225 4833 scope.go:117] "RemoveContainer" containerID="5087c8644018381716899a6c740f0cf8cdc609a082dfd4f0fc00630109fb219b" Oct 13 07:17:01 crc kubenswrapper[4833]: I1013 07:17:01.431756 4833 scope.go:117] "RemoveContainer" 
containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:17:01 crc kubenswrapper[4833]: E1013 07:17:01.433142 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:17:15 crc kubenswrapper[4833]: I1013 07:17:15.628032 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:17:15 crc kubenswrapper[4833]: E1013 07:17:15.629523 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:17:30 crc kubenswrapper[4833]: I1013 07:17:30.631476 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:17:30 crc kubenswrapper[4833]: E1013 07:17:30.632232 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:17:45 crc kubenswrapper[4833]: I1013 07:17:45.626993 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:17:45 crc kubenswrapper[4833]: E1013 07:17:45.627790 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:17:56 crc kubenswrapper[4833]: I1013 07:17:56.626845 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:17:56 crc kubenswrapper[4833]: E1013 07:17:56.627521 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:18:10 crc kubenswrapper[4833]: I1013 07:18:10.639334 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:18:10 crc kubenswrapper[4833]: E1013 07:18:10.640675 4833 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.627410 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:18:23 crc kubenswrapper[4833]: E1013 07:18:23.629147 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.794848 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8g5m"] Oct 13 07:18:23 crc kubenswrapper[4833]: E1013 07:18:23.795345 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a30bb4f-fdcb-48b3-819c-004e02282a56" containerName="collect-profiles" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.795370 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a30bb4f-fdcb-48b3-819c-004e02282a56" containerName="collect-profiles" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.795648 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a30bb4f-fdcb-48b3-819c-004e02282a56" containerName="collect-profiles" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.797681 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.844271 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8g5m"] Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.871726 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-utilities\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.871781 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwwj\" (UniqueName: \"kubernetes.io/projected/cffa2abc-6a93-49cd-bd86-cad8669d1568-kube-api-access-6pwwj\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.871855 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-catalog-content\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.972633 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-catalog-content\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.972717 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-utilities\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.972740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pwwj\" (UniqueName: \"kubernetes.io/projected/cffa2abc-6a93-49cd-bd86-cad8669d1568-kube-api-access-6pwwj\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.973306 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-catalog-content\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:23 crc kubenswrapper[4833]: I1013 07:18:23.973356 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-utilities\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:24 crc kubenswrapper[4833]: I1013 07:18:24.000385 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6pwwj\" (UniqueName: \"kubernetes.io/projected/cffa2abc-6a93-49cd-bd86-cad8669d1568-kube-api-access-6pwwj\") pod \"certified-operators-t8g5m\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:24 crc kubenswrapper[4833]: I1013 07:18:24.133370 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:24 crc kubenswrapper[4833]: I1013 07:18:24.390336 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8g5m"] Oct 13 07:18:25 crc kubenswrapper[4833]: I1013 07:18:25.097491 4833 generic.go:334] "Generic (PLEG): container finished" podID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerID="6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155" exitCode=0 Oct 13 07:18:25 crc kubenswrapper[4833]: I1013 07:18:25.097600 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8g5m" event={"ID":"cffa2abc-6a93-49cd-bd86-cad8669d1568","Type":"ContainerDied","Data":"6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155"} Oct 13 07:18:25 crc kubenswrapper[4833]: I1013 07:18:25.097882 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8g5m" event={"ID":"cffa2abc-6a93-49cd-bd86-cad8669d1568","Type":"ContainerStarted","Data":"059893284cb90490101ba70beb8738e5f38caf6ca998af716bb6b1607fbe5a7d"} Oct 13 07:18:25 crc kubenswrapper[4833]: I1013 07:18:25.098983 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:18:26 crc kubenswrapper[4833]: I1013 07:18:26.110979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8g5m" event={"ID":"cffa2abc-6a93-49cd-bd86-cad8669d1568","Type":"ContainerStarted","Data":"fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f"} Oct 13 07:18:27 crc kubenswrapper[4833]: I1013 07:18:27.120713 4833 generic.go:334] "Generic (PLEG): container finished" podID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerID="fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f" exitCode=0 Oct 13 07:18:27 crc kubenswrapper[4833]: I1013 07:18:27.120760 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8g5m" event={"ID":"cffa2abc-6a93-49cd-bd86-cad8669d1568","Type":"ContainerDied","Data":"fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f"} Oct 13 07:18:28 crc kubenswrapper[4833]: I1013 07:18:28.128382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8g5m" event={"ID":"cffa2abc-6a93-49cd-bd86-cad8669d1568","Type":"ContainerStarted","Data":"ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c"} Oct 13 07:18:28 crc kubenswrapper[4833]: I1013 07:18:28.145650 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t8g5m" podStartSLOduration=2.700252678 podStartE2EDuration="5.145632921s" podCreationTimestamp="2025-10-13 07:18:23 +0000 UTC" firstStartedPulling="2025-10-13 07:18:25.098793565 +0000 UTC m=+2995.199216481" lastFinishedPulling="2025-10-13 07:18:27.544173808 +0000 UTC m=+2997.644596724" observedRunningTime="2025-10-13 07:18:28.142124313 +0000 UTC m=+2998.242547249" watchObservedRunningTime="2025-10-13 
07:18:28.145632921 +0000 UTC m=+2998.246055837" Oct 13 07:18:34 crc kubenswrapper[4833]: I1013 07:18:34.133924 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:34 crc kubenswrapper[4833]: I1013 07:18:34.136162 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:34 crc kubenswrapper[4833]: I1013 07:18:34.212813 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:34 crc kubenswrapper[4833]: I1013 07:18:34.628037 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:18:34 crc kubenswrapper[4833]: E1013 07:18:34.628852 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:18:35 crc kubenswrapper[4833]: I1013 07:18:35.309281 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:35 crc kubenswrapper[4833]: I1013 07:18:35.362697 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8g5m"] Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.259761 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t8g5m" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerName="registry-server" containerID="cri-o://ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c" gracePeriod=2 Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.686710 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.787868 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-catalog-content\") pod \"cffa2abc-6a93-49cd-bd86-cad8669d1568\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.788054 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pwwj\" (UniqueName: \"kubernetes.io/projected/cffa2abc-6a93-49cd-bd86-cad8669d1568-kube-api-access-6pwwj\") pod \"cffa2abc-6a93-49cd-bd86-cad8669d1568\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.788141 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-utilities\") pod \"cffa2abc-6a93-49cd-bd86-cad8669d1568\" (UID: \"cffa2abc-6a93-49cd-bd86-cad8669d1568\") " Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.789688 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-utilities" (OuterVolumeSpecName: "utilities") pod "cffa2abc-6a93-49cd-bd86-cad8669d1568" (UID: "cffa2abc-6a93-49cd-bd86-cad8669d1568"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.794131 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffa2abc-6a93-49cd-bd86-cad8669d1568-kube-api-access-6pwwj" (OuterVolumeSpecName: "kube-api-access-6pwwj") pod "cffa2abc-6a93-49cd-bd86-cad8669d1568" (UID: "cffa2abc-6a93-49cd-bd86-cad8669d1568"). InnerVolumeSpecName "kube-api-access-6pwwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.838362 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cffa2abc-6a93-49cd-bd86-cad8669d1568" (UID: "cffa2abc-6a93-49cd-bd86-cad8669d1568"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.890195 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.890242 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pwwj\" (UniqueName: \"kubernetes.io/projected/cffa2abc-6a93-49cd-bd86-cad8669d1568-kube-api-access-6pwwj\") on node \"crc\" DevicePath \"\"" Oct 13 07:18:37 crc kubenswrapper[4833]: I1013 07:18:37.890257 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cffa2abc-6a93-49cd-bd86-cad8669d1568-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.272328 4833 generic.go:334] "Generic (PLEG): container finished" podID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerID="ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c" exitCode=0 Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.272411 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8g5m" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.272411 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8g5m" event={"ID":"cffa2abc-6a93-49cd-bd86-cad8669d1568","Type":"ContainerDied","Data":"ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c"} Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.272573 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8g5m" event={"ID":"cffa2abc-6a93-49cd-bd86-cad8669d1568","Type":"ContainerDied","Data":"059893284cb90490101ba70beb8738e5f38caf6ca998af716bb6b1607fbe5a7d"} Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.272599 4833 scope.go:117] "RemoveContainer" containerID="ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.295102 4833 scope.go:117] "RemoveContainer" containerID="fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.313809 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8g5m"] Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.322292 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t8g5m"] Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.328223 4833 scope.go:117] "RemoveContainer" containerID="6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.357145 4833 scope.go:117] "RemoveContainer" containerID="ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c" Oct 13 07:18:38 crc kubenswrapper[4833]: E1013 07:18:38.357552 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c\": container with ID starting with ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c not found: ID does not exist" containerID="ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.357581 
4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c"} err="failed to get container status \"ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c\": rpc error: code = NotFound desc = could not find container \"ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c\": container with ID starting with ddcc374b14d600ede587a2f644881ad93c6a637183ce6854c229e14dcd1be13c not found: ID does not exist" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.357610 4833 scope.go:117] "RemoveContainer" containerID="fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f" Oct 13 07:18:38 crc kubenswrapper[4833]: E1013 07:18:38.357925 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f\": container with ID starting with fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f not found: ID does not exist" containerID="fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.357970 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f"} err="failed to get container status \"fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f\": rpc error: code = NotFound desc = could not find container \"fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f\": container with ID starting with fd06132ce825001ff897729116b187c4ad63ae938fb343a01aef4d7a16439a6f not found: ID does not exist" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.358004 4833 scope.go:117] "RemoveContainer" containerID="6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155" Oct 13 07:18:38 crc kubenswrapper[4833]: E1013 07:18:38.358371 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155\": container with ID starting with 6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155 not found: ID does not exist" containerID="6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.358401 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155"} err="failed to get container status \"6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155\": rpc error: code = NotFound desc = could not find container \"6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155\": container with ID starting with 6ce2581524b8506fd155094b39abd4cf7f4cf540cd83b037b701dbae2db67155 not found: ID does not exist" Oct 13 07:18:38 crc kubenswrapper[4833]: I1013 07:18:38.645893 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" path="/var/lib/kubelet/pods/cffa2abc-6a93-49cd-bd86-cad8669d1568/volumes" Oct 13 07:18:46 crc kubenswrapper[4833]: I1013 07:18:46.626805 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:18:46 crc kubenswrapper[4833]: E1013 07:18:46.627650 4833 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:19:00 crc kubenswrapper[4833]: I1013 07:19:00.634661 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:19:00 crc kubenswrapper[4833]: E1013 07:19:00.635673 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:19:12 crc kubenswrapper[4833]: I1013 07:19:12.626772 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:19:12 crc kubenswrapper[4833]: E1013 07:19:12.627558 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:19:26 crc kubenswrapper[4833]: I1013 07:19:26.628120 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:19:26 crc kubenswrapper[4833]: E1013 07:19:26.628939 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:19:37 crc kubenswrapper[4833]: I1013 07:19:37.627086 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:19:37 crc kubenswrapper[4833]: E1013 07:19:37.627745 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:19:48 crc kubenswrapper[4833]: I1013 07:19:48.627743 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:19:48 crc kubenswrapper[4833]: E1013 07:19:48.628741 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:20:01 crc kubenswrapper[4833]: I1013 07:20:01.627210 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:20:01 crc kubenswrapper[4833]: E1013 07:20:01.628187 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:20:15 crc kubenswrapper[4833]: I1013 07:20:15.626911 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:20:15 crc kubenswrapper[4833]: E1013 07:20:15.627915 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:20:28 crc kubenswrapper[4833]: I1013 07:20:28.627131 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:20:28 crc kubenswrapper[4833]: E1013 07:20:28.627951 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:20:41 crc kubenswrapper[4833]: I1013 07:20:41.627590 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:20:41 crc kubenswrapper[4833]: E1013 07:20:41.628616 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:20:52 crc kubenswrapper[4833]: I1013 07:20:52.629080 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:20:52 crc kubenswrapper[4833]: E1013 07:20:52.629790 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:21:07 crc kubenswrapper[4833]: I1013 07:21:07.626765 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:21:07 crc kubenswrapper[4833]: E1013 07:21:07.627515 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:21:18 crc kubenswrapper[4833]: I1013 07:21:18.627137 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:21:18 crc kubenswrapper[4833]: E1013 07:21:18.627896 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:21:29 crc kubenswrapper[4833]: I1013 07:21:29.627010 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:21:29 crc kubenswrapper[4833]: E1013 07:21:29.627782 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:21:42 crc kubenswrapper[4833]: I1013 07:21:42.628222 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:21:42 crc kubenswrapper[4833]: E1013 07:21:42.629011 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:21:57 crc kubenswrapper[4833]: I1013 07:21:57.627337 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:21:57 crc kubenswrapper[4833]: E1013 07:21:57.628287 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:22:11 crc kubenswrapper[4833]: I1013 07:22:11.627057 4833 scope.go:117] "RemoveContainer" 
containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:22:12 crc kubenswrapper[4833]: I1013 07:22:12.122620 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"dec8ffab83352499b5fd4956d7345094cdf5fff4cbca6ecd15b239c4c3af2e14"} Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.010317 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-twgqd"] Oct 13 07:22:54 crc kubenswrapper[4833]: E1013 07:22:54.013133 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerName="extract-utilities" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.013162 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerName="extract-utilities" Oct 13 07:22:54 crc kubenswrapper[4833]: E1013 07:22:54.013178 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerName="registry-server" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.013187 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerName="registry-server" Oct 13 07:22:54 crc kubenswrapper[4833]: E1013 07:22:54.013227 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerName="extract-content" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.013234 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerName="extract-content" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.013391 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cffa2abc-6a93-49cd-bd86-cad8669d1568" containerName="registry-server" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.014628 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.037588 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twgqd"] Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.154183 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5mr\" (UniqueName: \"kubernetes.io/projected/72996464-98f1-41e3-9502-f7c1e37dbd69-kube-api-access-8x5mr\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.154247 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-utilities\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.154313 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-catalog-content\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.255699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x5mr\" (UniqueName: \"kubernetes.io/projected/72996464-98f1-41e3-9502-f7c1e37dbd69-kube-api-access-8x5mr\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.255765 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-utilities\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.255821 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-catalog-content\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.256360 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-utilities\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.256481 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-catalog-content\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.281291 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8x5mr\" (UniqueName: \"kubernetes.io/projected/72996464-98f1-41e3-9502-f7c1e37dbd69-kube-api-access-8x5mr\") pod \"redhat-marketplace-twgqd\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.349397 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:22:54 crc kubenswrapper[4833]: I1013 07:22:54.798057 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twgqd"] Oct 13 07:22:55 crc kubenswrapper[4833]: I1013 07:22:55.471049 4833 generic.go:334] "Generic (PLEG): container finished" podID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerID="140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f" exitCode=0 Oct 13 07:22:55 crc kubenswrapper[4833]: I1013 07:22:55.471094 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twgqd" event={"ID":"72996464-98f1-41e3-9502-f7c1e37dbd69","Type":"ContainerDied","Data":"140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f"} Oct 13 07:22:55 crc kubenswrapper[4833]: I1013 07:22:55.471118 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twgqd" event={"ID":"72996464-98f1-41e3-9502-f7c1e37dbd69","Type":"ContainerStarted","Data":"17ada7bb33e5b5a78f511590d4da6f41710d9247266a356ac0a614622a772b84"} Oct 13 07:22:56 crc kubenswrapper[4833]: I1013 07:22:56.483085 4833 generic.go:334] "Generic (PLEG): container finished" podID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerID="19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1" exitCode=0 Oct 13 07:22:56 crc kubenswrapper[4833]: I1013 07:22:56.483247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twgqd" event={"ID":"72996464-98f1-41e3-9502-f7c1e37dbd69","Type":"ContainerDied","Data":"19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1"} Oct 13 07:22:57 crc kubenswrapper[4833]: I1013 07:22:57.501441 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twgqd" event={"ID":"72996464-98f1-41e3-9502-f7c1e37dbd69","Type":"ContainerStarted","Data":"492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461"} Oct 13 07:22:57 crc kubenswrapper[4833]: I1013 07:22:57.528023 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-twgqd" podStartSLOduration=2.95722974 podStartE2EDuration="4.527996366s" podCreationTimestamp="2025-10-13 07:22:53 +0000 UTC" firstStartedPulling="2025-10-13 07:22:55.472867742 +0000 UTC m=+3265.573290658" lastFinishedPulling="2025-10-13 07:22:57.043634328 +0000 UTC m=+3267.144057284" observedRunningTime="2025-10-13 07:22:57.521911413 +0000 UTC m=+3267.622334369" watchObservedRunningTime="2025-10-13 07:22:57.527996366 +0000 UTC m=+3267.628419312" Oct 13 07:23:04 crc kubenswrapper[4833]: I1013 07:23:04.350195 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:23:04 crc kubenswrapper[4833]: I1013 07:23:04.350875 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:23:04 crc kubenswrapper[4833]: I1013 07:23:04.428397 4833 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:23:04 crc kubenswrapper[4833]: I1013 07:23:04.646139 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:23:04 crc kubenswrapper[4833]: I1013 07:23:04.690983 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twgqd"] Oct 13 07:23:06 crc kubenswrapper[4833]: I1013 07:23:06.581615 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-twgqd" podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerName="registry-server" containerID="cri-o://492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461" gracePeriod=2 Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.019418 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.167875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x5mr\" (UniqueName: \"kubernetes.io/projected/72996464-98f1-41e3-9502-f7c1e37dbd69-kube-api-access-8x5mr\") pod \"72996464-98f1-41e3-9502-f7c1e37dbd69\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.167944 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-utilities\") pod \"72996464-98f1-41e3-9502-f7c1e37dbd69\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.168053 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-catalog-content\") pod \"72996464-98f1-41e3-9502-f7c1e37dbd69\" (UID: \"72996464-98f1-41e3-9502-f7c1e37dbd69\") " Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.168886 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-utilities" (OuterVolumeSpecName: "utilities") pod "72996464-98f1-41e3-9502-f7c1e37dbd69" (UID: "72996464-98f1-41e3-9502-f7c1e37dbd69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.173111 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72996464-98f1-41e3-9502-f7c1e37dbd69-kube-api-access-8x5mr" (OuterVolumeSpecName: "kube-api-access-8x5mr") pod "72996464-98f1-41e3-9502-f7c1e37dbd69" (UID: "72996464-98f1-41e3-9502-f7c1e37dbd69"). InnerVolumeSpecName "kube-api-access-8x5mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.184166 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72996464-98f1-41e3-9502-f7c1e37dbd69" (UID: "72996464-98f1-41e3-9502-f7c1e37dbd69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.269384 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x5mr\" (UniqueName: \"kubernetes.io/projected/72996464-98f1-41e3-9502-f7c1e37dbd69-kube-api-access-8x5mr\") on node \"crc\" DevicePath \"\"" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.269442 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.269462 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72996464-98f1-41e3-9502-f7c1e37dbd69-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.595232 4833 generic.go:334] "Generic (PLEG): container finished" podID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerID="492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461" exitCode=0 Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.595406 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twgqd" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.595373 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twgqd" event={"ID":"72996464-98f1-41e3-9502-f7c1e37dbd69","Type":"ContainerDied","Data":"492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461"} Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.595711 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twgqd" event={"ID":"72996464-98f1-41e3-9502-f7c1e37dbd69","Type":"ContainerDied","Data":"17ada7bb33e5b5a78f511590d4da6f41710d9247266a356ac0a614622a772b84"} Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.595803 4833 scope.go:117] "RemoveContainer" containerID="492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.630698 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twgqd"] Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.632457 4833 scope.go:117] "RemoveContainer" containerID="19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.640846 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-twgqd"] Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.667647 4833 scope.go:117] "RemoveContainer" containerID="140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.708856 4833 scope.go:117] "RemoveContainer" containerID="492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461" Oct 13 07:23:07 crc kubenswrapper[4833]: E1013 07:23:07.709400 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461\": container with ID starting with 492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461 not found: ID does not exist" containerID="492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.709457 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461"} err="failed to get container status \"492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461\": rpc error: code = NotFound desc = could not find container \"492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461\": container with ID starting with 492a751ce2b3c2161d863c47499f3852c48c2d1e487b4778e296057946e1e461 not found: ID does not exist" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.709480 4833 scope.go:117] "RemoveContainer" containerID="19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1" Oct 13 07:23:07 crc kubenswrapper[4833]: E1013 07:23:07.710136 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1\": container with ID starting with 19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1 not found: ID does not exist" containerID="19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.710293 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1"} err="failed to get container status \"19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1\": rpc error: code = NotFound desc = could not find container \"19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1\": container with ID starting with 19ea0cd73c690925dede4b3440888ff26e7d13ee9fd90956c2cd19af0f9f7cd1 not found: ID does not exist" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.710340 4833 scope.go:117] "RemoveContainer" containerID="140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f" Oct 13 07:23:07 crc kubenswrapper[4833]: E1013 07:23:07.710903 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f\": container with ID starting with 140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f not found: ID does not exist" containerID="140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f" Oct 13 07:23:07 crc kubenswrapper[4833]: I1013 07:23:07.710926 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f"} err="failed to get container status \"140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f\": rpc error: code = NotFound desc = could not find container \"140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f\": container with ID starting with 140a71697a166f5c4bc6fcffeb21d1b565b388b32265a0e81f511c91baa5c12f not found: ID does not exist" Oct 13 07:23:08 crc kubenswrapper[4833]: I1013 07:23:08.637445 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" path="/var/lib/kubelet/pods/72996464-98f1-41e3-9502-f7c1e37dbd69/volumes" Oct 13 07:24:05 crc kubenswrapper[4833]: I1013 07:24:05.868926 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qtr2g"] Oct 13 07:24:05 crc kubenswrapper[4833]: E1013 07:24:05.869961 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerName="registry-server" Oct 13 07:24:05 crc kubenswrapper[4833]: I1013 07:24:05.869982 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerName="registry-server" Oct 13 07:24:05 crc kubenswrapper[4833]: E1013 07:24:05.870011 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerName="extract-utilities" Oct 13 07:24:05 crc kubenswrapper[4833]: I1013 07:24:05.870025 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerName="extract-utilities" Oct 13 07:24:05 crc kubenswrapper[4833]: E1013 07:24:05.870055 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerName="extract-content" Oct 13 07:24:05 crc kubenswrapper[4833]: I1013 07:24:05.870068 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerName="extract-content" Oct 13 07:24:05 crc kubenswrapper[4833]: I1013 07:24:05.870347 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72996464-98f1-41e3-9502-f7c1e37dbd69" containerName="registry-server" Oct 13 07:24:05 crc kubenswrapper[4833]: I1013 07:24:05.872835 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:05 crc kubenswrapper[4833]: I1013 07:24:05.893747 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtr2g"] Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.011953 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-utilities\") pod \"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.012041 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgwqm\" (UniqueName: \"kubernetes.io/projected/d704cf46-aa21-4c65-ad7c-c1c219b06c40-kube-api-access-fgwqm\") pod \"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.012068 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-catalog-content\") pod \"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.114103 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-utilities\") pod \"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.114601 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgwqm\" (UniqueName: \"kubernetes.io/projected/d704cf46-aa21-4c65-ad7c-c1c219b06c40-kube-api-access-fgwqm\") pod 
\"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.114632 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-utilities\") pod \"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.114660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-catalog-content\") pod \"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.115437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-catalog-content\") pod \"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.152449 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgwqm\" (UniqueName: \"kubernetes.io/projected/d704cf46-aa21-4c65-ad7c-c1c219b06c40-kube-api-access-fgwqm\") pod \"community-operators-qtr2g\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.212648 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:06 crc kubenswrapper[4833]: W1013 07:24:06.711333 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd704cf46_aa21_4c65_ad7c_c1c219b06c40.slice/crio-5bdf8b3666eba01cb3d6a64472c7335acbc4976794115d1ad565f1f5781d40ef WatchSource:0}: Error finding container 5bdf8b3666eba01cb3d6a64472c7335acbc4976794115d1ad565f1f5781d40ef: Status 404 returned error can't find the container with id 5bdf8b3666eba01cb3d6a64472c7335acbc4976794115d1ad565f1f5781d40ef Oct 13 07:24:06 crc kubenswrapper[4833]: I1013 07:24:06.712247 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtr2g"] Oct 13 07:24:07 crc kubenswrapper[4833]: I1013 07:24:07.120098 4833 generic.go:334] "Generic (PLEG): container finished" podID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerID="1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177" exitCode=0 Oct 13 07:24:07 crc kubenswrapper[4833]: I1013 07:24:07.120178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtr2g" event={"ID":"d704cf46-aa21-4c65-ad7c-c1c219b06c40","Type":"ContainerDied","Data":"1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177"} Oct 13 07:24:07 crc kubenswrapper[4833]: I1013 07:24:07.120229 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtr2g" event={"ID":"d704cf46-aa21-4c65-ad7c-c1c219b06c40","Type":"ContainerStarted","Data":"5bdf8b3666eba01cb3d6a64472c7335acbc4976794115d1ad565f1f5781d40ef"} Oct 13 07:24:07 crc kubenswrapper[4833]: I1013 07:24:07.123339 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:24:08 crc kubenswrapper[4833]: I1013 07:24:08.128984 4833 generic.go:334] "Generic (PLEG): container finished" podID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerID="6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059" exitCode=0 Oct 13 07:24:08 crc kubenswrapper[4833]: I1013 07:24:08.129076 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtr2g" event={"ID":"d704cf46-aa21-4c65-ad7c-c1c219b06c40","Type":"ContainerDied","Data":"6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059"} Oct 13 07:24:09 crc kubenswrapper[4833]: I1013 07:24:09.141166 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtr2g" event={"ID":"d704cf46-aa21-4c65-ad7c-c1c219b06c40","Type":"ContainerStarted","Data":"e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9"} Oct 13 07:24:09 crc kubenswrapper[4833]: I1013 07:24:09.171076 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qtr2g" podStartSLOduration=2.712438159 podStartE2EDuration="4.1710537s" podCreationTimestamp="2025-10-13 07:24:05 +0000 UTC" firstStartedPulling="2025-10-13 07:24:07.12275874 +0000 UTC m=+3337.223181686" lastFinishedPulling="2025-10-13 07:24:08.581374271 +0000 UTC m=+3338.681797227" observedRunningTime="2025-10-13 07:24:09.166960314 +0000 UTC m=+3339.267383240" watchObservedRunningTime="2025-10-13 07:24:09.1710537 +0000 UTC m=+3339.271476626" Oct 13 07:24:16 crc kubenswrapper[4833]: I1013 07:24:16.213306 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:16 crc kubenswrapper[4833]: I1013 07:24:16.213904 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:16 crc kubenswrapper[4833]: I1013 07:24:16.304075 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:17 crc kubenswrapper[4833]: I1013 07:24:17.252862 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:17 crc kubenswrapper[4833]: I1013 07:24:17.305153 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtr2g"] Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.218761 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qtr2g" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerName="registry-server" containerID="cri-o://e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9" gracePeriod=2 Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.592757 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.727775 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-catalog-content\") pod \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.727830 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgwqm\" (UniqueName: \"kubernetes.io/projected/d704cf46-aa21-4c65-ad7c-c1c219b06c40-kube-api-access-fgwqm\") pod \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.727858 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-utilities\") pod \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\" (UID: \"d704cf46-aa21-4c65-ad7c-c1c219b06c40\") " Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.729725 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-utilities" (OuterVolumeSpecName: "utilities") pod "d704cf46-aa21-4c65-ad7c-c1c219b06c40" (UID: "d704cf46-aa21-4c65-ad7c-c1c219b06c40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.733750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d704cf46-aa21-4c65-ad7c-c1c219b06c40-kube-api-access-fgwqm" (OuterVolumeSpecName: "kube-api-access-fgwqm") pod "d704cf46-aa21-4c65-ad7c-c1c219b06c40" (UID: "d704cf46-aa21-4c65-ad7c-c1c219b06c40"). InnerVolumeSpecName "kube-api-access-fgwqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.789413 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d704cf46-aa21-4c65-ad7c-c1c219b06c40" (UID: "d704cf46-aa21-4c65-ad7c-c1c219b06c40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.830431 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.830508 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgwqm\" (UniqueName: \"kubernetes.io/projected/d704cf46-aa21-4c65-ad7c-c1c219b06c40-kube-api-access-fgwqm\") on node \"crc\" DevicePath \"\"" Oct 13 07:24:19 crc kubenswrapper[4833]: I1013 07:24:19.830592 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d704cf46-aa21-4c65-ad7c-c1c219b06c40-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.228467 4833 generic.go:334] "Generic (PLEG): container finished" podID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerID="e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9" exitCode=0 Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.228516 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtr2g" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.228513 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtr2g" event={"ID":"d704cf46-aa21-4c65-ad7c-c1c219b06c40","Type":"ContainerDied","Data":"e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9"} Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.228640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtr2g" event={"ID":"d704cf46-aa21-4c65-ad7c-c1c219b06c40","Type":"ContainerDied","Data":"5bdf8b3666eba01cb3d6a64472c7335acbc4976794115d1ad565f1f5781d40ef"} Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.228661 4833 scope.go:117] "RemoveContainer" containerID="e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.250470 4833 scope.go:117] "RemoveContainer" containerID="6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.262239 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtr2g"] Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.269196 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qtr2g"] Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.322843 4833 scope.go:117] "RemoveContainer" containerID="1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.347316 4833 scope.go:117] "RemoveContainer" containerID="e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9" Oct 13 07:24:20 crc kubenswrapper[4833]: E1013 07:24:20.347823 4833 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9\": container with ID starting with e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9 not found: ID does not exist" containerID="e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.347888 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9"} err="failed to get container status \"e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9\": rpc error: code = NotFound desc = could not find container \"e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9\": container with ID starting with e6da9363699e7ba6f7344ae21c0b253ff3818cf24875fe0cb31c5005682e15c9 not found: ID does not exist" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.347921 4833 scope.go:117] "RemoveContainer" containerID="6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059" Oct 13 07:24:20 crc kubenswrapper[4833]: E1013 07:24:20.348407 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059\": container with ID starting with 6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059 not found: ID does not exist" containerID="6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.348451 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059"} err="failed to get container status \"6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059\": rpc error: code = NotFound desc = could not find container \"6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059\": container with ID starting with 6a01bca0e0997ee14c2b060bff9701bb3fc6893e2543f320139a8ed8bcc8e059 not found: ID does not exist" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.348489 4833 scope.go:117] "RemoveContainer" containerID="1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177" Oct 13 07:24:20 crc kubenswrapper[4833]: E1013 07:24:20.348778 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177\": container with ID starting with 1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177 not found: ID does not exist" containerID="1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.348816 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177"} err="failed to get container status \"1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177\": rpc error: code = NotFound desc = could not find container \"1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177\": container with ID starting with 1c33113fa3dbde0f204e75873e7f50f019f2f142f7f09151837a5b9a381d4177 not found: ID does not exist" Oct 13 07:24:20 crc kubenswrapper[4833]: I1013 07:24:20.642859 4833 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" path="/var/lib/kubelet/pods/d704cf46-aa21-4c65-ad7c-c1c219b06c40/volumes" Oct 13 07:24:30 crc kubenswrapper[4833]: I1013 07:24:30.550692 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:24:30 crc kubenswrapper[4833]: I1013 07:24:30.551742 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.804371 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mw7lm"] Oct 13 07:24:35 crc kubenswrapper[4833]: E1013 07:24:35.805355 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerName="extract-utilities" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.805369 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerName="extract-utilities" Oct 13 07:24:35 crc kubenswrapper[4833]: E1013 07:24:35.805381 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerName="registry-server" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.805387 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerName="registry-server" Oct 13 07:24:35 crc kubenswrapper[4833]: E1013 07:24:35.805412 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerName="extract-content" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.805418 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerName="extract-content" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.805579 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d704cf46-aa21-4c65-ad7c-c1c219b06c40" containerName="registry-server" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.806581 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.830384 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mw7lm"] Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.959733 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-catalog-content\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.959784 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8k69\" (UniqueName: \"kubernetes.io/projected/2c3f022f-af43-4122-859f-a0efa6f985a3-kube-api-access-z8k69\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:35 crc kubenswrapper[4833]: I1013 07:24:35.959891 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-utilities\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:36 crc kubenswrapper[4833]: I1013 07:24:36.061638 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-catalog-content\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:36 crc kubenswrapper[4833]: I1013 07:24:36.061684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8k69\" (UniqueName: \"kubernetes.io/projected/2c3f022f-af43-4122-859f-a0efa6f985a3-kube-api-access-z8k69\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:36 crc kubenswrapper[4833]: I1013 07:24:36.061720 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-utilities\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:36 crc kubenswrapper[4833]: I1013 07:24:36.062214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-catalog-content\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:36 crc kubenswrapper[4833]: I1013 07:24:36.062375 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-utilities\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:36 crc kubenswrapper[4833]: I1013 07:24:36.079722 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z8k69\" (UniqueName: \"kubernetes.io/projected/2c3f022f-af43-4122-859f-a0efa6f985a3-kube-api-access-z8k69\") pod \"redhat-operators-mw7lm\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:36 crc kubenswrapper[4833]: I1013 07:24:36.141082 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:36 crc kubenswrapper[4833]: I1013 07:24:36.591144 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mw7lm"] Oct 13 07:24:37 crc kubenswrapper[4833]: I1013 07:24:37.401026 4833 generic.go:334] "Generic (PLEG): container finished" podID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerID="931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8" exitCode=0 Oct 13 07:24:37 crc kubenswrapper[4833]: I1013 07:24:37.401106 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw7lm" event={"ID":"2c3f022f-af43-4122-859f-a0efa6f985a3","Type":"ContainerDied","Data":"931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8"} Oct 13 07:24:37 crc kubenswrapper[4833]: I1013 07:24:37.401217 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw7lm" event={"ID":"2c3f022f-af43-4122-859f-a0efa6f985a3","Type":"ContainerStarted","Data":"bf33a5e0311781fa7802bb6b1cde0fe6dd2c28a689686daa25c2c715cc4f293a"} Oct 13 07:24:39 crc kubenswrapper[4833]: I1013 07:24:39.419094 4833 generic.go:334] "Generic (PLEG): container finished" podID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerID="50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35" exitCode=0 Oct 13 07:24:39 crc kubenswrapper[4833]: I1013 07:24:39.419168 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw7lm" event={"ID":"2c3f022f-af43-4122-859f-a0efa6f985a3","Type":"ContainerDied","Data":"50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35"} Oct 13 07:24:40 crc kubenswrapper[4833]: I1013 07:24:40.430067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw7lm" event={"ID":"2c3f022f-af43-4122-859f-a0efa6f985a3","Type":"ContainerStarted","Data":"6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13"} Oct 13 07:24:40 crc kubenswrapper[4833]: I1013 07:24:40.458118 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mw7lm" podStartSLOduration=3.023910806 podStartE2EDuration="5.458092948s" podCreationTimestamp="2025-10-13 07:24:35 +0000 UTC" firstStartedPulling="2025-10-13 07:24:37.403130623 +0000 UTC m=+3367.503553559" lastFinishedPulling="2025-10-13 07:24:39.837312785 +0000 UTC m=+3369.937735701" observedRunningTime="2025-10-13 07:24:40.450891233 +0000 UTC m=+3370.551314179" watchObservedRunningTime="2025-10-13 07:24:40.458092948 +0000 UTC m=+3370.558515884" Oct 13 07:24:46 crc kubenswrapper[4833]: I1013 07:24:46.141461 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:46 crc kubenswrapper[4833]: I1013 07:24:46.143368 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:46 crc kubenswrapper[4833]: I1013 07:24:46.200074 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:46 crc kubenswrapper[4833]: I1013 07:24:46.542237 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:46 crc kubenswrapper[4833]: I1013 07:24:46.615922 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mw7lm"] Oct 13 07:24:48 crc kubenswrapper[4833]: I1013 07:24:48.499112 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mw7lm" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerName="registry-server" containerID="cri-o://6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13" gracePeriod=2 Oct 13 07:24:48 crc kubenswrapper[4833]: I1013 07:24:48.946384 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.055592 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8k69\" (UniqueName: \"kubernetes.io/projected/2c3f022f-af43-4122-859f-a0efa6f985a3-kube-api-access-z8k69\") pod \"2c3f022f-af43-4122-859f-a0efa6f985a3\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.055634 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-catalog-content\") pod \"2c3f022f-af43-4122-859f-a0efa6f985a3\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.055662 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-utilities\") pod \"2c3f022f-af43-4122-859f-a0efa6f985a3\" (UID: \"2c3f022f-af43-4122-859f-a0efa6f985a3\") " Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.057432 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-utilities" (OuterVolumeSpecName: "utilities") pod "2c3f022f-af43-4122-859f-a0efa6f985a3" (UID: "2c3f022f-af43-4122-859f-a0efa6f985a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.061809 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3f022f-af43-4122-859f-a0efa6f985a3-kube-api-access-z8k69" (OuterVolumeSpecName: "kube-api-access-z8k69") pod "2c3f022f-af43-4122-859f-a0efa6f985a3" (UID: "2c3f022f-af43-4122-859f-a0efa6f985a3"). InnerVolumeSpecName "kube-api-access-z8k69". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.149083 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c3f022f-af43-4122-859f-a0efa6f985a3" (UID: "2c3f022f-af43-4122-859f-a0efa6f985a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.157855 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8k69\" (UniqueName: \"kubernetes.io/projected/2c3f022f-af43-4122-859f-a0efa6f985a3-kube-api-access-z8k69\") on node \"crc\" DevicePath \"\"" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.157898 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.157916 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3f022f-af43-4122-859f-a0efa6f985a3-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.510205 4833 generic.go:334] "Generic (PLEG): container finished" podID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerID="6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13" exitCode=0 Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.510270 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw7lm" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.510268 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw7lm" event={"ID":"2c3f022f-af43-4122-859f-a0efa6f985a3","Type":"ContainerDied","Data":"6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13"} Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.510778 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw7lm" event={"ID":"2c3f022f-af43-4122-859f-a0efa6f985a3","Type":"ContainerDied","Data":"bf33a5e0311781fa7802bb6b1cde0fe6dd2c28a689686daa25c2c715cc4f293a"} Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.510805 4833 scope.go:117] "RemoveContainer" containerID="6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.531653 4833 scope.go:117] "RemoveContainer" containerID="50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.541403 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mw7lm"] Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.550669 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mw7lm"] Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.579270 4833 scope.go:117] "RemoveContainer" containerID="931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.602889 4833 scope.go:117] "RemoveContainer" containerID="6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13" Oct 13 07:24:49 crc kubenswrapper[4833]: E1013 07:24:49.603400 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13\": container with ID starting with 6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13 not found: ID does not exist" containerID="6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.603576 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13"} err="failed to get container status \"6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13\": rpc error: code = NotFound desc = could not find container \"6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13\": container with ID starting with 6dcfd12e93498deb5e4e9d69ed581c8c759cd2c8650046cf769ac5d6162d6d13 not found: ID does not exist" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.603690 4833 scope.go:117] "RemoveContainer" containerID="50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35" Oct 13 07:24:49 crc kubenswrapper[4833]: E1013 07:24:49.604185 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35\": container with ID starting with 50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35 not found: ID does not exist" containerID="50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.604304 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35"} err="failed to get container status \"50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35\": rpc error: code = NotFound desc = could not find container \"50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35\": container with ID starting with 50d39f3a16807f76e1bd623600916f1f65bbba1e754c760b2babb446da706e35 not found: ID does not exist" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.604450 4833 scope.go:117] "RemoveContainer" containerID="931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8" Oct 13 07:24:49 crc kubenswrapper[4833]: E1013 07:24:49.604857 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8\": container with ID starting with 931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8 not found: ID does not exist" containerID="931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8" Oct 13 07:24:49 crc kubenswrapper[4833]: I1013 07:24:49.604909 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8"} err="failed to get container status \"931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8\": rpc error: code = NotFound desc = could not find container \"931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8\": container with ID starting with 931f42aeebf2da7a0258a01499875758f0b284aa8148742e616b9d042f17eda8 not found: ID does not exist" Oct 13 07:24:50 crc kubenswrapper[4833]: I1013 07:24:50.640922 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" path="/var/lib/kubelet/pods/2c3f022f-af43-4122-859f-a0efa6f985a3/volumes" Oct 13 07:25:00 crc kubenswrapper[4833]: I1013 07:25:00.542691 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:25:00 crc kubenswrapper[4833]: I1013 07:25:00.543351 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:25:30 crc kubenswrapper[4833]: I1013 07:25:30.542564 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:25:30 crc kubenswrapper[4833]: I1013 07:25:30.544931 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:25:30 crc kubenswrapper[4833]: I1013 07:25:30.545199 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:25:30 crc kubenswrapper[4833]: I1013 07:25:30.546253 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dec8ffab83352499b5fd4956d7345094cdf5fff4cbca6ecd15b239c4c3af2e14"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:25:30 crc kubenswrapper[4833]: I1013 07:25:30.546648 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://dec8ffab83352499b5fd4956d7345094cdf5fff4cbca6ecd15b239c4c3af2e14" gracePeriod=600 Oct 13 07:25:30 crc kubenswrapper[4833]: I1013 07:25:30.889373 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="dec8ffab83352499b5fd4956d7345094cdf5fff4cbca6ecd15b239c4c3af2e14" exitCode=0 Oct 13 07:25:30 crc kubenswrapper[4833]: I1013 07:25:30.889494 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"dec8ffab83352499b5fd4956d7345094cdf5fff4cbca6ecd15b239c4c3af2e14"} Oct 13 07:25:30 crc kubenswrapper[4833]: I1013 07:25:30.889809 4833 scope.go:117] "RemoveContainer" containerID="8ca9e0aa458c56b27a31f6725f59af080adc3117aae6b0bd613302cb6f343e70" Oct 13 07:25:31 crc kubenswrapper[4833]: I1013 07:25:31.905922 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8"} Oct 13 07:27:30 crc kubenswrapper[4833]: I1013 07:27:30.543292 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:27:30 crc kubenswrapper[4833]: I1013 07:27:30.543981 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:28:00 crc kubenswrapper[4833]: I1013 07:28:00.542939 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:28:00 crc kubenswrapper[4833]: I1013 07:28:00.543642 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:28:30 crc kubenswrapper[4833]: I1013 07:28:30.543374 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:28:30 crc kubenswrapper[4833]: I1013 07:28:30.544395 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:28:30 crc kubenswrapper[4833]: I1013 07:28:30.544486 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:28:30 crc kubenswrapper[4833]: I1013 07:28:30.545728 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:28:30 crc kubenswrapper[4833]: I1013 07:28:30.545895 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" gracePeriod=600 Oct 13 07:28:30 crc kubenswrapper[4833]: E1013 07:28:30.679695 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:28:31 crc kubenswrapper[4833]: I1013 07:28:31.561770 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" exitCode=0 Oct 13 07:28:31 crc kubenswrapper[4833]: I1013 07:28:31.561829 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8"} Oct 13 07:28:31 crc kubenswrapper[4833]: I1013 07:28:31.561922 4833 scope.go:117] "RemoveContainer" containerID="dec8ffab83352499b5fd4956d7345094cdf5fff4cbca6ecd15b239c4c3af2e14" Oct 13 07:28:31 crc kubenswrapper[4833]: I1013 07:28:31.563225 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:28:31 crc kubenswrapper[4833]: E1013 07:28:31.564173 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:28:42 crc kubenswrapper[4833]: I1013 07:28:42.629183 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:28:42 crc kubenswrapper[4833]: E1013 07:28:42.630517 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:28:54 crc kubenswrapper[4833]: I1013 07:28:54.627443 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:28:54 crc kubenswrapper[4833]: E1013 07:28:54.628424 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:29:09 crc kubenswrapper[4833]: I1013 07:29:09.627910 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:29:09 crc kubenswrapper[4833]: E1013 07:29:09.628782 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:29:23 crc kubenswrapper[4833]: I1013 
07:29:23.627357 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:29:23 crc kubenswrapper[4833]: E1013 07:29:23.628824 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.870076 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mg7mm"] Oct 13 07:29:27 crc kubenswrapper[4833]: E1013 07:29:27.870874 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerName="extract-content" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.870896 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerName="extract-content" Oct 13 07:29:27 crc kubenswrapper[4833]: E1013 07:29:27.870938 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerName="registry-server" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.870949 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerName="registry-server" Oct 13 07:29:27 crc kubenswrapper[4833]: E1013 07:29:27.870970 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerName="extract-utilities" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.870982 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerName="extract-utilities" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.871206 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3f022f-af43-4122-859f-a0efa6f985a3" containerName="registry-server" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.872638 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.893935 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mg7mm"] Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.976406 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-utilities\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.976850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-catalog-content\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:27 crc kubenswrapper[4833]: I1013 07:29:27.976939 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746l6\" (UniqueName: \"kubernetes.io/projected/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-kube-api-access-746l6\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:28 crc kubenswrapper[4833]: I1013 07:29:28.078776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-utilities\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:28 crc kubenswrapper[4833]: I1013 07:29:28.078843 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-catalog-content\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:28 crc kubenswrapper[4833]: I1013 07:29:28.078892 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746l6\" (UniqueName: \"kubernetes.io/projected/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-kube-api-access-746l6\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:28 crc kubenswrapper[4833]: I1013 07:29:28.079438 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-utilities\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:28 crc kubenswrapper[4833]: I1013 07:29:28.079658 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-catalog-content\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:28 crc kubenswrapper[4833]: I1013 07:29:28.103612 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-746l6\" (UniqueName: \"kubernetes.io/projected/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-kube-api-access-746l6\") pod \"certified-operators-mg7mm\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:28 crc kubenswrapper[4833]: I1013 07:29:28.199957 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:28 crc kubenswrapper[4833]: I1013 07:29:28.471138 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mg7mm"] Oct 13 07:29:29 crc kubenswrapper[4833]: I1013 07:29:29.053201 4833 generic.go:334] "Generic (PLEG): container finished" podID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerID="26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084" exitCode=0 Oct 13 07:29:29 crc kubenswrapper[4833]: I1013 07:29:29.053245 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg7mm" event={"ID":"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c","Type":"ContainerDied","Data":"26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084"} Oct 13 07:29:29 crc kubenswrapper[4833]: I1013 07:29:29.053272 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg7mm" event={"ID":"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c","Type":"ContainerStarted","Data":"59c3ac5fe180a2581f7aff5f5a0473b8ddd3de2296a53af96846d33b2059be85"} Oct 13 07:29:29 crc kubenswrapper[4833]: I1013 07:29:29.054841 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:29:30 crc kubenswrapper[4833]: I1013 07:29:30.061847 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg7mm" event={"ID":"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c","Type":"ContainerStarted","Data":"2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66"} Oct 13 07:29:31 crc kubenswrapper[4833]: I1013 07:29:31.073728 4833 generic.go:334] "Generic (PLEG): container finished" podID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerID="2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66" exitCode=0 Oct 13 07:29:31 crc kubenswrapper[4833]: I1013 07:29:31.073788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg7mm" event={"ID":"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c","Type":"ContainerDied","Data":"2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66"} Oct 13 07:29:32 crc kubenswrapper[4833]: I1013 07:29:32.092498 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg7mm" event={"ID":"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c","Type":"ContainerStarted","Data":"7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f"} Oct 13 07:29:32 crc kubenswrapper[4833]: I1013 07:29:32.116986 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mg7mm" podStartSLOduration=2.716161292 podStartE2EDuration="5.116963126s" podCreationTimestamp="2025-10-13 07:29:27 +0000 UTC" firstStartedPulling="2025-10-13 07:29:29.05458845 +0000 UTC m=+3659.155011366" lastFinishedPulling="2025-10-13 07:29:31.455390234 +0000 UTC m=+3661.555813200" observedRunningTime="2025-10-13 07:29:32.108911057 +0000 UTC m=+3662.209334003" watchObservedRunningTime="2025-10-13 
07:29:32.116963126 +0000 UTC m=+3662.217386052" Oct 13 07:29:35 crc kubenswrapper[4833]: I1013 07:29:35.627334 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:29:35 crc kubenswrapper[4833]: E1013 07:29:35.627905 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:29:38 crc kubenswrapper[4833]: I1013 07:29:38.201329 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:38 crc kubenswrapper[4833]: I1013 07:29:38.203097 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:38 crc kubenswrapper[4833]: I1013 07:29:38.269439 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:39 crc kubenswrapper[4833]: I1013 07:29:39.218421 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:39 crc kubenswrapper[4833]: I1013 07:29:39.268236 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mg7mm"] Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.169470 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mg7mm" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerName="registry-server" containerID="cri-o://7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f" gracePeriod=2 Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.546306 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.689148 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-catalog-content\") pod \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.689345 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-utilities\") pod \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.689617 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746l6\" (UniqueName: \"kubernetes.io/projected/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-kube-api-access-746l6\") pod \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\" (UID: \"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c\") " Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.691610 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-utilities" (OuterVolumeSpecName: "utilities") pod "ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" (UID: "ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.702947 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-kube-api-access-746l6" (OuterVolumeSpecName: "kube-api-access-746l6") pod "ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" (UID: "ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c"). InnerVolumeSpecName "kube-api-access-746l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.745957 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" (UID: "ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.791683 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.791752 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:29:41 crc kubenswrapper[4833]: I1013 07:29:41.791781 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746l6\" (UniqueName: \"kubernetes.io/projected/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c-kube-api-access-746l6\") on node \"crc\" DevicePath \"\"" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.184124 4833 generic.go:334] "Generic (PLEG): container finished" podID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerID="7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f" exitCode=0 Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.184186 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg7mm" event={"ID":"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c","Type":"ContainerDied","Data":"7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f"} Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.184236 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg7mm" event={"ID":"ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c","Type":"ContainerDied","Data":"59c3ac5fe180a2581f7aff5f5a0473b8ddd3de2296a53af96846d33b2059be85"} Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.184264 4833 scope.go:117] "RemoveContainer" containerID="7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.184298 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mg7mm" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.229754 4833 scope.go:117] "RemoveContainer" containerID="2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.258827 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mg7mm"] Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.270233 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mg7mm"] Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.283940 4833 scope.go:117] "RemoveContainer" containerID="26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.324118 4833 scope.go:117] "RemoveContainer" containerID="7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f" Oct 13 07:29:42 crc kubenswrapper[4833]: E1013 07:29:42.324490 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f\": container with ID starting with 7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f not found: ID does not exist" containerID="7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.324563 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f"} err="failed to get container status \"7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f\": rpc error: code = NotFound desc = could not find container \"7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f\": container with ID starting with 7057c193a2426263e9add3763e2f079fe396767603b59762e6b797081a5f9f2f not found: ID does not exist" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.324600 4833 scope.go:117] "RemoveContainer" containerID="2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66" Oct 13 07:29:42 crc kubenswrapper[4833]: E1013 07:29:42.325100 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66\": container with ID starting with 2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66 not found: ID does not exist" containerID="2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.325218 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66"} err="failed to get container status \"2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66\": rpc error: code = NotFound desc = could not find container \"2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66\": container with ID starting with 2875270ebb98df00e0c3ce20ccb630563b85cb128454df0004b103fd16da4b66 not found: ID does not exist" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.325299 4833 scope.go:117] "RemoveContainer" containerID="26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084" Oct 13 07:29:42 crc kubenswrapper[4833]: E1013 07:29:42.325800 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084\": container with ID starting with 26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084 not found: ID does not exist" containerID="26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.325888 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084"} err="failed to get container status \"26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084\": rpc error: code = NotFound desc = could not find container \"26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084\": container with ID starting with 26e50b3ac9bb8120e4de0472513665e9b8b553baee5457fe6172265516a59084 not found: ID does not exist" Oct 13 07:29:42 crc kubenswrapper[4833]: I1013 07:29:42.639582 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" path="/var/lib/kubelet/pods/ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c/volumes" Oct 13 07:29:47 crc kubenswrapper[4833]: I1013 07:29:47.628085 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:29:47 crc kubenswrapper[4833]: E1013 07:29:47.629270 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.168417 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5"] Oct 13 07:30:00 crc kubenswrapper[4833]: E1013 07:30:00.169519 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerName="registry-server" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.169617 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerName="registry-server" Oct 13 07:30:00 crc kubenswrapper[4833]: E1013 07:30:00.169643 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerName="extract-utilities" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.169656 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerName="extract-utilities" Oct 13 07:30:00 crc kubenswrapper[4833]: E1013 07:30:00.169685 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerName="extract-content" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.169700 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerName="extract-content" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.169943 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7d3bd0-0565-4ab7-9979-b70c1d0a1f1c" containerName="registry-server" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.170691 4833 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.172515 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.172869 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.175991 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5"] Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.203783 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91db143d-f5c3-48fa-b831-85ab090ffb9f-config-volume\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.203856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91db143d-f5c3-48fa-b831-85ab090ffb9f-secret-volume\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.203902 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x86m\" (UniqueName: \"kubernetes.io/projected/91db143d-f5c3-48fa-b831-85ab090ffb9f-kube-api-access-8x86m\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.304834 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91db143d-f5c3-48fa-b831-85ab090ffb9f-secret-volume\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.304932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x86m\" (UniqueName: \"kubernetes.io/projected/91db143d-f5c3-48fa-b831-85ab090ffb9f-kube-api-access-8x86m\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.305024 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91db143d-f5c3-48fa-b831-85ab090ffb9f-config-volume\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.306025 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/91db143d-f5c3-48fa-b831-85ab090ffb9f-config-volume\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.310041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91db143d-f5c3-48fa-b831-85ab090ffb9f-secret-volume\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.320794 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x86m\" (UniqueName: \"kubernetes.io/projected/91db143d-f5c3-48fa-b831-85ab090ffb9f-kube-api-access-8x86m\") pod \"collect-profiles-29339010-9chh5\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.495007 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.639232 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:30:00 crc kubenswrapper[4833]: E1013 07:30:00.640250 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:30:00 crc kubenswrapper[4833]: I1013 07:30:00.994567 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5"] Oct 13 07:30:01 crc kubenswrapper[4833]: W1013 07:30:00.999862 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91db143d_f5c3_48fa_b831_85ab090ffb9f.slice/crio-9b6843996fb7f6bebdbb1915c8501685d9bd8a4cc6c0697d95a254f82cdde98a WatchSource:0}: Error finding container 9b6843996fb7f6bebdbb1915c8501685d9bd8a4cc6c0697d95a254f82cdde98a: Status 404 returned error can't find the container with id 9b6843996fb7f6bebdbb1915c8501685d9bd8a4cc6c0697d95a254f82cdde98a Oct 13 07:30:01 crc kubenswrapper[4833]: I1013 07:30:01.348965 4833 generic.go:334] "Generic (PLEG): container finished" podID="91db143d-f5c3-48fa-b831-85ab090ffb9f" containerID="71e2dfd01c05c470207a663618ab14c6077d7ff6ae013e0ba734777e754fb07b" exitCode=0 Oct 13 07:30:01 crc kubenswrapper[4833]: I1013 07:30:01.349013 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" event={"ID":"91db143d-f5c3-48fa-b831-85ab090ffb9f","Type":"ContainerDied","Data":"71e2dfd01c05c470207a663618ab14c6077d7ff6ae013e0ba734777e754fb07b"} Oct 13 07:30:01 crc kubenswrapper[4833]: I1013 07:30:01.349041 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" 
event={"ID":"91db143d-f5c3-48fa-b831-85ab090ffb9f","Type":"ContainerStarted","Data":"9b6843996fb7f6bebdbb1915c8501685d9bd8a4cc6c0697d95a254f82cdde98a"} Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.677464 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.753024 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x86m\" (UniqueName: \"kubernetes.io/projected/91db143d-f5c3-48fa-b831-85ab090ffb9f-kube-api-access-8x86m\") pod \"91db143d-f5c3-48fa-b831-85ab090ffb9f\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.753374 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91db143d-f5c3-48fa-b831-85ab090ffb9f-secret-volume\") pod \"91db143d-f5c3-48fa-b831-85ab090ffb9f\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.753505 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91db143d-f5c3-48fa-b831-85ab090ffb9f-config-volume\") pod \"91db143d-f5c3-48fa-b831-85ab090ffb9f\" (UID: \"91db143d-f5c3-48fa-b831-85ab090ffb9f\") " Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.754326 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91db143d-f5c3-48fa-b831-85ab090ffb9f-config-volume" (OuterVolumeSpecName: "config-volume") pod "91db143d-f5c3-48fa-b831-85ab090ffb9f" (UID: "91db143d-f5c3-48fa-b831-85ab090ffb9f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.757497 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91db143d-f5c3-48fa-b831-85ab090ffb9f-kube-api-access-8x86m" (OuterVolumeSpecName: "kube-api-access-8x86m") pod "91db143d-f5c3-48fa-b831-85ab090ffb9f" (UID: "91db143d-f5c3-48fa-b831-85ab090ffb9f"). InnerVolumeSpecName "kube-api-access-8x86m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.757550 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91db143d-f5c3-48fa-b831-85ab090ffb9f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "91db143d-f5c3-48fa-b831-85ab090ffb9f" (UID: "91db143d-f5c3-48fa-b831-85ab090ffb9f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.854947 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91db143d-f5c3-48fa-b831-85ab090ffb9f-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.855205 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91db143d-f5c3-48fa-b831-85ab090ffb9f-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 07:30:02 crc kubenswrapper[4833]: I1013 07:30:02.855268 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x86m\" (UniqueName: \"kubernetes.io/projected/91db143d-f5c3-48fa-b831-85ab090ffb9f-kube-api-access-8x86m\") on node \"crc\" DevicePath \"\"" Oct 13 07:30:03 crc kubenswrapper[4833]: I1013 07:30:03.372517 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" event={"ID":"91db143d-f5c3-48fa-b831-85ab090ffb9f","Type":"ContainerDied","Data":"9b6843996fb7f6bebdbb1915c8501685d9bd8a4cc6c0697d95a254f82cdde98a"} Oct 13 07:30:03 crc kubenswrapper[4833]: I1013 07:30:03.372604 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b6843996fb7f6bebdbb1915c8501685d9bd8a4cc6c0697d95a254f82cdde98a" Oct 13 07:30:03 crc kubenswrapper[4833]: I1013 07:30:03.373139 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5" Oct 13 07:30:03 crc kubenswrapper[4833]: I1013 07:30:03.747864 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw"] Oct 13 07:30:03 crc kubenswrapper[4833]: I1013 07:30:03.752597 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338965-dfhlw"] Oct 13 07:30:04 crc kubenswrapper[4833]: I1013 07:30:04.635871 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1129198-3dd0-4ad3-8211-eb80e02362af" path="/var/lib/kubelet/pods/e1129198-3dd0-4ad3-8211-eb80e02362af/volumes" Oct 13 07:30:11 crc kubenswrapper[4833]: I1013 07:30:11.626640 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:30:11 crc kubenswrapper[4833]: E1013 07:30:11.627384 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:30:24 crc kubenswrapper[4833]: I1013 07:30:24.627988 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:30:24 crc kubenswrapper[4833]: E1013 07:30:24.629287 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:30:38 crc kubenswrapper[4833]: I1013 07:30:38.628168 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:30:38 crc kubenswrapper[4833]: E1013 07:30:38.629456 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:30:49 crc kubenswrapper[4833]: I1013 07:30:49.612669 4833 scope.go:117] "RemoveContainer" containerID="e5be0d78a4e72dae27c51937eee7234a33b843053201d55f6115c0604a701dfe" Oct 13 07:30:53 crc kubenswrapper[4833]: I1013 07:30:53.627683 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:30:53 crc kubenswrapper[4833]: E1013 07:30:53.628941 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:31:04 crc kubenswrapper[4833]: I1013 07:31:04.626984 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:31:04 crc kubenswrapper[4833]: E1013 07:31:04.627690 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:31:17 crc kubenswrapper[4833]: I1013 07:31:17.626961 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:31:17 crc kubenswrapper[4833]: E1013 07:31:17.627690 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:31:32 crc kubenswrapper[4833]: I1013 07:31:32.628343 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:31:32 crc kubenswrapper[4833]: E1013 07:31:32.629861 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:31:43 crc kubenswrapper[4833]: I1013 07:31:43.626979 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:31:43 crc kubenswrapper[4833]: E1013 07:31:43.627750 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:31:56 crc kubenswrapper[4833]: I1013 07:31:56.634250 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:31:56 crc kubenswrapper[4833]: E1013 07:31:56.634772 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:32:10 crc kubenswrapper[4833]: I1013 07:32:10.631421 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:32:10 crc kubenswrapper[4833]: E1013 07:32:10.632883 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:32:22 crc kubenswrapper[4833]: I1013 07:32:22.628226 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:32:22 crc kubenswrapper[4833]: E1013 07:32:22.628989 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:32:33 crc kubenswrapper[4833]: I1013 07:32:33.627006 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:32:33 crc kubenswrapper[4833]: E1013 07:32:33.627969 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:32:45 crc kubenswrapper[4833]: I1013 07:32:45.626882 4833 
scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:32:45 crc kubenswrapper[4833]: E1013 07:32:45.627643 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:32:58 crc kubenswrapper[4833]: I1013 07:32:58.626851 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:32:58 crc kubenswrapper[4833]: E1013 07:32:58.628493 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:33:12 crc kubenswrapper[4833]: I1013 07:33:12.627890 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:33:12 crc kubenswrapper[4833]: E1013 07:33:12.628756 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:33:24 crc kubenswrapper[4833]: I1013 07:33:24.627807 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:33:24 crc kubenswrapper[4833]: E1013 07:33:24.630025 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.242315 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqgf8"] Oct 13 07:33:36 crc kubenswrapper[4833]: E1013 07:33:36.243324 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91db143d-f5c3-48fa-b831-85ab090ffb9f" containerName="collect-profiles" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.243342 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="91db143d-f5c3-48fa-b831-85ab090ffb9f" containerName="collect-profiles" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.243572 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="91db143d-f5c3-48fa-b831-85ab090ffb9f" containerName="collect-profiles" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.246406 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.261653 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqgf8"] Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.385742 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4tf\" (UniqueName: \"kubernetes.io/projected/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-kube-api-access-fs4tf\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.386242 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-utilities\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.386405 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-catalog-content\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.487425 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-catalog-content\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.487494 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs4tf\" (UniqueName: \"kubernetes.io/projected/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-kube-api-access-fs4tf\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.487631 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-utilities\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.488177 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-utilities\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.488459 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-catalog-content\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.507700 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fs4tf\" (UniqueName: \"kubernetes.io/projected/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-kube-api-access-fs4tf\") pod \"redhat-marketplace-sqgf8\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.598850 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:36 crc kubenswrapper[4833]: I1013 07:33:36.829203 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqgf8"] Oct 13 07:33:37 crc kubenswrapper[4833]: I1013 07:33:37.158675 4833 generic.go:334] "Generic (PLEG): container finished" podID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerID="1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e" exitCode=0 Oct 13 07:33:37 crc kubenswrapper[4833]: I1013 07:33:37.158778 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqgf8" event={"ID":"ffb2568c-30fd-41d0-af19-ee0b78e3d41e","Type":"ContainerDied","Data":"1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e"} Oct 13 07:33:37 crc kubenswrapper[4833]: I1013 07:33:37.159133 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqgf8" event={"ID":"ffb2568c-30fd-41d0-af19-ee0b78e3d41e","Type":"ContainerStarted","Data":"89913acebfed2bf728b4f1c96d5b4d7b4d763067c8840061b6073d98803f0a5a"} Oct 13 07:33:37 crc kubenswrapper[4833]: I1013 07:33:37.626806 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:33:38 crc kubenswrapper[4833]: I1013 07:33:38.169252 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"37ccfe037d527dc1f623c1ffe515273eff62feb34b54804621df13b0947520fa"} Oct 13 07:33:38 crc kubenswrapper[4833]: I1013 07:33:38.171295 4833 generic.go:334] "Generic (PLEG): container finished" podID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerID="8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2" exitCode=0 Oct 13 07:33:38 crc kubenswrapper[4833]: I1013 07:33:38.171349 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqgf8" event={"ID":"ffb2568c-30fd-41d0-af19-ee0b78e3d41e","Type":"ContainerDied","Data":"8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2"} Oct 13 07:33:39 crc kubenswrapper[4833]: I1013 07:33:39.183385 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqgf8" event={"ID":"ffb2568c-30fd-41d0-af19-ee0b78e3d41e","Type":"ContainerStarted","Data":"3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378"} Oct 13 07:33:39 crc kubenswrapper[4833]: I1013 07:33:39.208301 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqgf8" podStartSLOduration=1.67962341 podStartE2EDuration="3.208278092s" podCreationTimestamp="2025-10-13 07:33:36 +0000 UTC" firstStartedPulling="2025-10-13 07:33:37.160695013 +0000 UTC m=+3907.261117949" lastFinishedPulling="2025-10-13 07:33:38.689349705 +0000 UTC m=+3908.789772631" observedRunningTime="2025-10-13 07:33:39.206290546 +0000 UTC m=+3909.306713502" watchObservedRunningTime="2025-10-13 
07:33:39.208278092 +0000 UTC m=+3909.308700998" Oct 13 07:33:46 crc kubenswrapper[4833]: I1013 07:33:46.598913 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:46 crc kubenswrapper[4833]: I1013 07:33:46.599639 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:46 crc kubenswrapper[4833]: I1013 07:33:46.675470 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:47 crc kubenswrapper[4833]: I1013 07:33:47.322336 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:47 crc kubenswrapper[4833]: I1013 07:33:47.387398 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqgf8"] Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.266272 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqgf8" podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerName="registry-server" containerID="cri-o://3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378" gracePeriod=2 Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.657309 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.812134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-utilities\") pod \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.812275 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-catalog-content\") pod \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.812346 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs4tf\" (UniqueName: \"kubernetes.io/projected/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-kube-api-access-fs4tf\") pod \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\" (UID: \"ffb2568c-30fd-41d0-af19-ee0b78e3d41e\") " Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.814124 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-utilities" (OuterVolumeSpecName: "utilities") pod "ffb2568c-30fd-41d0-af19-ee0b78e3d41e" (UID: "ffb2568c-30fd-41d0-af19-ee0b78e3d41e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.819794 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-kube-api-access-fs4tf" (OuterVolumeSpecName: "kube-api-access-fs4tf") pod "ffb2568c-30fd-41d0-af19-ee0b78e3d41e" (UID: "ffb2568c-30fd-41d0-af19-ee0b78e3d41e"). InnerVolumeSpecName "kube-api-access-fs4tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.835182 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffb2568c-30fd-41d0-af19-ee0b78e3d41e" (UID: "ffb2568c-30fd-41d0-af19-ee0b78e3d41e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.915313 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.915376 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:33:49 crc kubenswrapper[4833]: I1013 07:33:49.915398 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs4tf\" (UniqueName: \"kubernetes.io/projected/ffb2568c-30fd-41d0-af19-ee0b78e3d41e-kube-api-access-fs4tf\") on node \"crc\" DevicePath \"\"" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.285347 4833 generic.go:334] "Generic (PLEG): container finished" podID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerID="3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378" exitCode=0 Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.285458 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqgf8" event={"ID":"ffb2568c-30fd-41d0-af19-ee0b78e3d41e","Type":"ContainerDied","Data":"3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378"} Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.285506 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqgf8" event={"ID":"ffb2568c-30fd-41d0-af19-ee0b78e3d41e","Type":"ContainerDied","Data":"89913acebfed2bf728b4f1c96d5b4d7b4d763067c8840061b6073d98803f0a5a"} Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.285577 4833 scope.go:117] "RemoveContainer" containerID="3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.285813 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqgf8" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.317404 4833 scope.go:117] "RemoveContainer" containerID="8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.337446 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqgf8"] Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.341972 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqgf8"] Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.344791 4833 scope.go:117] "RemoveContainer" containerID="1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.364887 4833 scope.go:117] "RemoveContainer" containerID="3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378" Oct 13 07:33:50 crc kubenswrapper[4833]: E1013 07:33:50.365417 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378\": container with ID starting with 3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378 not found: ID does not exist" containerID="3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.365476 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378"} err="failed to get container status \"3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378\": rpc error: code = NotFound desc = could not find container \"3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378\": container with ID starting with 3d8ca9d15c48e94053112e80015d156b7b4f7042d57eb86317bf9be3b9895378 not found: ID does not exist" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.365514 4833 scope.go:117] "RemoveContainer" containerID="8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2" Oct 13 07:33:50 crc kubenswrapper[4833]: E1013 07:33:50.365887 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2\": container with ID starting with 8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2 not found: ID does not exist" containerID="8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.365919 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2"} err="failed to get container status \"8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2\": rpc error: code = NotFound desc = could not find container \"8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2\": container with ID starting with 8f92cc88ef024c7d97b9a607e5d73fc3f9b3018a51c966b8ddfba822e9c934d2 not found: ID does not exist" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.365942 4833 scope.go:117] "RemoveContainer" containerID="1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e" Oct 13 07:33:50 crc kubenswrapper[4833]: E1013 07:33:50.366227 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e\": container with ID starting with 1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e not found: ID does not exist" containerID="1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.366261 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e"} err="failed to get container status \"1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e\": rpc error: code = NotFound desc = could not find container \"1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e\": container with ID starting with 1ef5c61453fa488e3d32d33e0dedbe51e82b4d20d15450cdb1dd0bd2a815819e not found: ID does not exist" Oct 13 07:33:50 crc kubenswrapper[4833]: I1013 07:33:50.643303 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" path="/var/lib/kubelet/pods/ffb2568c-30fd-41d0-af19-ee0b78e3d41e/volumes" Oct 13 07:36:00 crc kubenswrapper[4833]: I1013 07:36:00.543179 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:36:00 crc kubenswrapper[4833]: I1013 07:36:00.543675 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:36:30 crc kubenswrapper[4833]: I1013 07:36:30.543242 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:36:30 crc kubenswrapper[4833]: I1013 07:36:30.543902 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.542954 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.543377 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.543419 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.543847 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37ccfe037d527dc1f623c1ffe515273eff62feb34b54804621df13b0947520fa"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.543908 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://37ccfe037d527dc1f623c1ffe515273eff62feb34b54804621df13b0947520fa" gracePeriod=600 Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.895005 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="37ccfe037d527dc1f623c1ffe515273eff62feb34b54804621df13b0947520fa" exitCode=0 Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.895087 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"37ccfe037d527dc1f623c1ffe515273eff62feb34b54804621df13b0947520fa"} Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.895716 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a"} Oct 13 07:37:00 crc kubenswrapper[4833]: I1013 07:37:00.895759 4833 scope.go:117] "RemoveContainer" containerID="01ad4a7bf8f87c8ad9767c44979bcb962be5a37b8095be35fed40107923c3eb8" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.202592 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bwrt4"] Oct 13 07:37:59 crc kubenswrapper[4833]: E1013 07:37:59.203938 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerName="extract-utilities" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.203964 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerName="extract-utilities" Oct 13 07:37:59 crc kubenswrapper[4833]: E1013 07:37:59.203986 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerName="extract-content" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.203999 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerName="extract-content" Oct 13 07:37:59 crc kubenswrapper[4833]: E1013 07:37:59.204022 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerName="registry-server" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.204039 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerName="registry-server" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.204321 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ffb2568c-30fd-41d0-af19-ee0b78e3d41e" containerName="registry-server" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.207102 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.213933 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwrt4"] Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.318611 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fa694e-4c55-40eb-a912-78d3e13520f0-catalog-content\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.318685 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvwt\" (UniqueName: \"kubernetes.io/projected/26fa694e-4c55-40eb-a912-78d3e13520f0-kube-api-access-gxvwt\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.318801 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fa694e-4c55-40eb-a912-78d3e13520f0-utilities\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.390514 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8l5jk"] Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.392332 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.412465 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8l5jk"] Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.420041 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fa694e-4c55-40eb-a912-78d3e13520f0-catalog-content\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.420113 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxvwt\" (UniqueName: \"kubernetes.io/projected/26fa694e-4c55-40eb-a912-78d3e13520f0-kube-api-access-gxvwt\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.420580 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fa694e-4c55-40eb-a912-78d3e13520f0-utilities\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.420697 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fa694e-4c55-40eb-a912-78d3e13520f0-catalog-content\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.420872 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fa694e-4c55-40eb-a912-78d3e13520f0-utilities\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.446941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxvwt\" (UniqueName: \"kubernetes.io/projected/26fa694e-4c55-40eb-a912-78d3e13520f0-kube-api-access-gxvwt\") pod \"community-operators-bwrt4\" (UID: \"26fa694e-4c55-40eb-a912-78d3e13520f0\") " pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.522013 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-utilities\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.522090 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnfv\" (UniqueName: \"kubernetes.io/projected/ad3f5ec1-9fd8-4805-91d6-e699f528783d-kube-api-access-kgnfv\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.522168 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-catalog-content\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.548028 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.623339 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-utilities\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.623422 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgnfv\" (UniqueName: \"kubernetes.io/projected/ad3f5ec1-9fd8-4805-91d6-e699f528783d-kube-api-access-kgnfv\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.623475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-catalog-content\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.623982 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-utilities\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.624023 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-catalog-content\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.672469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgnfv\" (UniqueName: \"kubernetes.io/projected/ad3f5ec1-9fd8-4805-91d6-e699f528783d-kube-api-access-kgnfv\") pod \"redhat-operators-8l5jk\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") " pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:37:59 crc kubenswrapper[4833]: I1013 07:37:59.712990 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:38:00 crc kubenswrapper[4833]: I1013 07:38:00.108460 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwrt4"] Oct 13 07:38:00 crc kubenswrapper[4833]: I1013 07:38:00.215775 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8l5jk"] Oct 13 07:38:00 crc kubenswrapper[4833]: W1013 07:38:00.217279 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad3f5ec1_9fd8_4805_91d6_e699f528783d.slice/crio-3dfe3b06d04e7d46670a621fdb854a1196cf739ff6c1a04ef9fb74a9029f7cbd WatchSource:0}: Error finding container 3dfe3b06d04e7d46670a621fdb854a1196cf739ff6c1a04ef9fb74a9029f7cbd: Status 404 returned error can't find the container with id 3dfe3b06d04e7d46670a621fdb854a1196cf739ff6c1a04ef9fb74a9029f7cbd Oct 13 07:38:00 crc kubenswrapper[4833]: I1013 07:38:00.390221 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l5jk" event={"ID":"ad3f5ec1-9fd8-4805-91d6-e699f528783d","Type":"ContainerStarted","Data":"3dfe3b06d04e7d46670a621fdb854a1196cf739ff6c1a04ef9fb74a9029f7cbd"} Oct 13 07:38:00 crc kubenswrapper[4833]: I1013 07:38:00.394156 4833 generic.go:334] "Generic (PLEG): container finished" podID="26fa694e-4c55-40eb-a912-78d3e13520f0" containerID="db7ce0dd5959da12f681b2bc68263d80e6c406242421ef93fdddac866b9a7fc5" exitCode=0 Oct 13 07:38:00 crc kubenswrapper[4833]: I1013 07:38:00.394208 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwrt4" event={"ID":"26fa694e-4c55-40eb-a912-78d3e13520f0","Type":"ContainerDied","Data":"db7ce0dd5959da12f681b2bc68263d80e6c406242421ef93fdddac866b9a7fc5"} Oct 13 07:38:00 crc kubenswrapper[4833]: I1013 07:38:00.394240 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwrt4" event={"ID":"26fa694e-4c55-40eb-a912-78d3e13520f0","Type":"ContainerStarted","Data":"2852812e741c6b2bf992eef2fc3961b782ce9db5ed5085f8216d65d2dabba049"} Oct 13 07:38:00 crc kubenswrapper[4833]: I1013 07:38:00.396272 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:38:01 crc kubenswrapper[4833]: I1013 07:38:01.405506 4833 generic.go:334] "Generic (PLEG): container finished" podID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerID="40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c" exitCode=0 Oct 13 07:38:01 crc kubenswrapper[4833]: I1013 07:38:01.405592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l5jk" event={"ID":"ad3f5ec1-9fd8-4805-91d6-e699f528783d","Type":"ContainerDied","Data":"40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c"} Oct 13 07:38:02 crc kubenswrapper[4833]: I1013 07:38:02.416248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l5jk" event={"ID":"ad3f5ec1-9fd8-4805-91d6-e699f528783d","Type":"ContainerStarted","Data":"4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871"} Oct 13 07:38:03 crc kubenswrapper[4833]: I1013 07:38:03.429745 4833 generic.go:334] "Generic (PLEG): container finished" podID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerID="4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871" exitCode=0 Oct 13 07:38:03 crc kubenswrapper[4833]: I1013 
07:38:03.429834 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l5jk" event={"ID":"ad3f5ec1-9fd8-4805-91d6-e699f528783d","Type":"ContainerDied","Data":"4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871"} Oct 13 07:38:04 crc kubenswrapper[4833]: I1013 07:38:04.437514 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l5jk" event={"ID":"ad3f5ec1-9fd8-4805-91d6-e699f528783d","Type":"ContainerStarted","Data":"f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751"} Oct 13 07:38:04 crc kubenswrapper[4833]: I1013 07:38:04.439243 4833 generic.go:334] "Generic (PLEG): container finished" podID="26fa694e-4c55-40eb-a912-78d3e13520f0" containerID="7a36a35565a4fcb9140638d8c25bb551061639969d29f83d7df40ffff8c1778c" exitCode=0 Oct 13 07:38:04 crc kubenswrapper[4833]: I1013 07:38:04.439277 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwrt4" event={"ID":"26fa694e-4c55-40eb-a912-78d3e13520f0","Type":"ContainerDied","Data":"7a36a35565a4fcb9140638d8c25bb551061639969d29f83d7df40ffff8c1778c"} Oct 13 07:38:04 crc kubenswrapper[4833]: I1013 07:38:04.459850 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8l5jk" podStartSLOduration=2.894640635 podStartE2EDuration="5.459831904s" podCreationTimestamp="2025-10-13 07:37:59 +0000 UTC" firstStartedPulling="2025-10-13 07:38:01.408920851 +0000 UTC m=+4171.509343767" lastFinishedPulling="2025-10-13 07:38:03.9741121 +0000 UTC m=+4174.074535036" observedRunningTime="2025-10-13 07:38:04.454705078 +0000 UTC m=+4174.555128004" watchObservedRunningTime="2025-10-13 07:38:04.459831904 +0000 UTC m=+4174.560254820" Oct 13 07:38:05 crc kubenswrapper[4833]: I1013 07:38:05.447861 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwrt4" event={"ID":"26fa694e-4c55-40eb-a912-78d3e13520f0","Type":"ContainerStarted","Data":"ecdf11f1c5dd80eeea5d3e2cf7dd8b56a5a907a88a1b5c46dcdbd45fe35388d7"} Oct 13 07:38:05 crc kubenswrapper[4833]: I1013 07:38:05.478507 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bwrt4" podStartSLOduration=2.032195393 podStartE2EDuration="6.478483052s" podCreationTimestamp="2025-10-13 07:37:59 +0000 UTC" firstStartedPulling="2025-10-13 07:38:00.395950804 +0000 UTC m=+4170.496373720" lastFinishedPulling="2025-10-13 07:38:04.842238423 +0000 UTC m=+4174.942661379" observedRunningTime="2025-10-13 07:38:05.469284571 +0000 UTC m=+4175.569707527" watchObservedRunningTime="2025-10-13 07:38:05.478483052 +0000 UTC m=+4175.578905968" Oct 13 07:38:09 crc kubenswrapper[4833]: I1013 07:38:09.549276 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:38:09 crc kubenswrapper[4833]: I1013 07:38:09.549727 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:38:09 crc kubenswrapper[4833]: I1013 07:38:09.585738 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bwrt4" Oct 13 07:38:09 crc kubenswrapper[4833]: I1013 07:38:09.713486 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:38:09 crc kubenswrapper[4833]: 
Oct 13 07:38:09 crc kubenswrapper[4833]: I1013 07:38:09.769174 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8l5jk"
Oct 13 07:38:10 crc kubenswrapper[4833]: I1013 07:38:10.545582 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8l5jk"
Oct 13 07:38:10 crc kubenswrapper[4833]: I1013 07:38:10.569931 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bwrt4"
Oct 13 07:38:11 crc kubenswrapper[4833]: I1013 07:38:11.181340 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8l5jk"]
Oct 13 07:38:12 crc kubenswrapper[4833]: I1013 07:38:12.505495 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8l5jk" podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerName="registry-server" containerID="cri-o://f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751" gracePeriod=2
Oct 13 07:38:12 crc kubenswrapper[4833]: I1013 07:38:12.600871 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwrt4"]
Oct 13 07:38:12 crc kubenswrapper[4833]: I1013 07:38:12.980985 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqjsg"]
Oct 13 07:38:12 crc kubenswrapper[4833]: I1013 07:38:12.981246 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rqjsg" podUID="9da3536c-cd43-4288-87df-1960453f5d50" containerName="registry-server" containerID="cri-o://ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b" gracePeriod=2
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.150990 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l5jk"
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.242755 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgnfv\" (UniqueName: \"kubernetes.io/projected/ad3f5ec1-9fd8-4805-91d6-e699f528783d-kube-api-access-kgnfv\") pod \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") "
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.242867 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-utilities\") pod \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") "
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.242909 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-catalog-content\") pod \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\" (UID: \"ad3f5ec1-9fd8-4805-91d6-e699f528783d\") "
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.248771 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-utilities" (OuterVolumeSpecName: "utilities") pod "ad3f5ec1-9fd8-4805-91d6-e699f528783d" (UID: "ad3f5ec1-9fd8-4805-91d6-e699f528783d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.257757 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3f5ec1-9fd8-4805-91d6-e699f528783d-kube-api-access-kgnfv" (OuterVolumeSpecName: "kube-api-access-kgnfv") pod "ad3f5ec1-9fd8-4805-91d6-e699f528783d" (UID: "ad3f5ec1-9fd8-4805-91d6-e699f528783d"). InnerVolumeSpecName "kube-api-access-kgnfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.344942 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgnfv\" (UniqueName: \"kubernetes.io/projected/ad3f5ec1-9fd8-4805-91d6-e699f528783d-kube-api-access-kgnfv\") on node \"crc\" DevicePath \"\""
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.344971 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.349717 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rqjsg"
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.387325 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad3f5ec1-9fd8-4805-91d6-e699f528783d" (UID: "ad3f5ec1-9fd8-4805-91d6-e699f528783d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.445578 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tts8z\" (UniqueName: \"kubernetes.io/projected/9da3536c-cd43-4288-87df-1960453f5d50-kube-api-access-tts8z\") pod \"9da3536c-cd43-4288-87df-1960453f5d50\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") "
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.445682 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-utilities\") pod \"9da3536c-cd43-4288-87df-1960453f5d50\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") "
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.445732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-catalog-content\") pod \"9da3536c-cd43-4288-87df-1960453f5d50\" (UID: \"9da3536c-cd43-4288-87df-1960453f5d50\") "
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.446038 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad3f5ec1-9fd8-4805-91d6-e699f528783d-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.446120 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-utilities" (OuterVolumeSpecName: "utilities") pod "9da3536c-cd43-4288-87df-1960453f5d50" (UID: "9da3536c-cd43-4288-87df-1960453f5d50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.475835 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da3536c-cd43-4288-87df-1960453f5d50-kube-api-access-tts8z" (OuterVolumeSpecName: "kube-api-access-tts8z") pod "9da3536c-cd43-4288-87df-1960453f5d50" (UID: "9da3536c-cd43-4288-87df-1960453f5d50"). InnerVolumeSpecName "kube-api-access-tts8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.491906 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9da3536c-cd43-4288-87df-1960453f5d50" (UID: "9da3536c-cd43-4288-87df-1960453f5d50"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.515250 4833 generic.go:334] "Generic (PLEG): container finished" podID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerID="f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751" exitCode=0 Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.515439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l5jk" event={"ID":"ad3f5ec1-9fd8-4805-91d6-e699f528783d","Type":"ContainerDied","Data":"f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751"} Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.516364 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l5jk" event={"ID":"ad3f5ec1-9fd8-4805-91d6-e699f528783d","Type":"ContainerDied","Data":"3dfe3b06d04e7d46670a621fdb854a1196cf739ff6c1a04ef9fb74a9029f7cbd"} Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.516463 4833 scope.go:117] "RemoveContainer" containerID="f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.515439 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l5jk" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.519800 4833 generic.go:334] "Generic (PLEG): container finished" podID="9da3536c-cd43-4288-87df-1960453f5d50" containerID="ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b" exitCode=0 Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.519876 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqjsg" event={"ID":"9da3536c-cd43-4288-87df-1960453f5d50","Type":"ContainerDied","Data":"ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b"} Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.519943 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqjsg" event={"ID":"9da3536c-cd43-4288-87df-1960453f5d50","Type":"ContainerDied","Data":"f0ce091a5021446fdefafa3c7b5b5a772210f067e6477e116566ce1d4a02d1b5"} Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.520059 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqjsg" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.535429 4833 scope.go:117] "RemoveContainer" containerID="4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.558043 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.558088 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da3536c-cd43-4288-87df-1960453f5d50-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.558102 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tts8z\" (UniqueName: \"kubernetes.io/projected/9da3536c-cd43-4288-87df-1960453f5d50-kube-api-access-tts8z\") on node \"crc\" DevicePath \"\"" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.582738 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8l5jk"] Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.587158 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8l5jk"] Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.596944 4833 scope.go:117] "RemoveContainer" containerID="40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.601097 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqjsg"] Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.608991 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rqjsg"] Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.621472 4833 scope.go:117] "RemoveContainer" containerID="f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751" Oct 13 07:38:13 crc kubenswrapper[4833]: E1013 07:38:13.621986 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751\": container with ID starting with f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751 not found: ID does not exist" containerID="f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.622049 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751"} err="failed to get container status \"f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751\": rpc error: code = NotFound desc = could not find container \"f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751\": container with ID starting with f0ede2f7686a5e4b11b12eeff84659f1decfa166063b514a48a0785cf5424751 not found: ID does not exist" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.622087 4833 scope.go:117] "RemoveContainer" containerID="4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871" Oct 13 07:38:13 crc kubenswrapper[4833]: E1013 07:38:13.622506 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871\": container with ID starting with 4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871 not found: ID does not exist" containerID="4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.622643 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871"} err="failed to get container status \"4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871\": rpc error: code = NotFound desc = could not find container \"4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871\": container with ID starting with 4bbd921055580b0e0123e15bd8a73eac437479bea0930ed8fc31b5c6ef4a4871 not found: ID does not exist" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.622742 4833 scope.go:117] "RemoveContainer" containerID="40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c" Oct 13 07:38:13 crc kubenswrapper[4833]: E1013 07:38:13.623170 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c\": container with ID starting with 40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c not found: ID does not exist" containerID="40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.623206 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c"} err="failed to get container status \"40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c\": rpc error: code = NotFound desc = could not find container \"40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c\": container with ID starting with 40becf73b2fe4ba848f7b2440fb47269e4a597bb45f47d39f67b218c36faa52c not found: ID does not exist" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.623226 4833 scope.go:117] "RemoveContainer" containerID="ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.644757 4833 scope.go:117] "RemoveContainer" containerID="450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.676331 4833 scope.go:117] "RemoveContainer" containerID="1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.692376 4833 scope.go:117] "RemoveContainer" containerID="ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b" Oct 13 07:38:13 crc kubenswrapper[4833]: E1013 07:38:13.692807 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b\": container with ID starting with ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b not found: ID does not exist" containerID="ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.692858 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b"} err="failed to get container status 
\"ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b\": rpc error: code = NotFound desc = could not find container \"ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b\": container with ID starting with ced5788f4566081a4c2c784af881f6abb8a099a4a328c16b21228046563d856b not found: ID does not exist" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.692892 4833 scope.go:117] "RemoveContainer" containerID="450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c" Oct 13 07:38:13 crc kubenswrapper[4833]: E1013 07:38:13.693347 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c\": container with ID starting with 450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c not found: ID does not exist" containerID="450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.693369 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c"} err="failed to get container status \"450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c\": rpc error: code = NotFound desc = could not find container \"450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c\": container with ID starting with 450f8178c9c0a2adb7714afa821ef85d122166ba4f35982671b3a2e50a9fe02c not found: ID does not exist" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.693414 4833 scope.go:117] "RemoveContainer" containerID="1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb" Oct 13 07:38:13 crc kubenswrapper[4833]: E1013 07:38:13.693788 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb\": container with ID starting with 1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb not found: ID does not exist" containerID="1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb" Oct 13 07:38:13 crc kubenswrapper[4833]: I1013 07:38:13.693819 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb"} err="failed to get container status \"1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb\": rpc error: code = NotFound desc = could not find container \"1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb\": container with ID starting with 1f56e2d7347b88dc11b272fd1db3dd5069ad4b8c8946f550ff7e1787e29627cb not found: ID does not exist" Oct 13 07:38:14 crc kubenswrapper[4833]: I1013 07:38:14.636056 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da3536c-cd43-4288-87df-1960453f5d50" path="/var/lib/kubelet/pods/9da3536c-cd43-4288-87df-1960453f5d50/volumes" Oct 13 07:38:14 crc kubenswrapper[4833]: I1013 07:38:14.636983 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" path="/var/lib/kubelet/pods/ad3f5ec1-9fd8-4805-91d6-e699f528783d/volumes" Oct 13 07:39:00 crc kubenswrapper[4833]: I1013 07:39:00.542648 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:39:00 crc kubenswrapper[4833]: I1013 07:39:00.543415 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:39:30 crc kubenswrapper[4833]: I1013 07:39:30.542507 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:39:30 crc kubenswrapper[4833]: I1013 07:39:30.543330 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.900607 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bvvtl"] Oct 13 07:39:50 crc kubenswrapper[4833]: E1013 07:39:50.901555 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da3536c-cd43-4288-87df-1960453f5d50" containerName="extract-content" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.901566 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da3536c-cd43-4288-87df-1960453f5d50" containerName="extract-content" Oct 13 07:39:50 crc kubenswrapper[4833]: E1013 07:39:50.901593 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da3536c-cd43-4288-87df-1960453f5d50" containerName="extract-utilities" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.901599 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da3536c-cd43-4288-87df-1960453f5d50" containerName="extract-utilities" Oct 13 07:39:50 crc kubenswrapper[4833]: E1013 07:39:50.901630 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da3536c-cd43-4288-87df-1960453f5d50" containerName="registry-server" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.901640 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da3536c-cd43-4288-87df-1960453f5d50" containerName="registry-server" Oct 13 07:39:50 crc kubenswrapper[4833]: E1013 07:39:50.901648 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerName="registry-server" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.901653 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerName="registry-server" Oct 13 07:39:50 crc kubenswrapper[4833]: E1013 07:39:50.901665 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerName="extract-content" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.901671 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerName="extract-content" Oct 13 07:39:50 crc kubenswrapper[4833]: E1013 07:39:50.901686 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerName="extract-utilities" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.901692 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerName="extract-utilities" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.901925 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3f5ec1-9fd8-4805-91d6-e699f528783d" containerName="registry-server" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.901951 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da3536c-cd43-4288-87df-1960453f5d50" containerName="registry-server" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.903645 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.909378 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvvtl"] Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.923729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vct4\" (UniqueName: \"kubernetes.io/projected/75aac7de-92db-4051-b4bb-227e2f69a0dd-kube-api-access-5vct4\") pod \"certified-operators-bvvtl\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.923846 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-catalog-content\") pod \"certified-operators-bvvtl\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:39:50 crc kubenswrapper[4833]: I1013 07:39:50.923923 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-utilities\") pod \"certified-operators-bvvtl\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:39:51 crc kubenswrapper[4833]: I1013 07:39:51.025379 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-utilities\") pod \"certified-operators-bvvtl\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:39:51 crc kubenswrapper[4833]: I1013 07:39:51.025521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vct4\" (UniqueName: \"kubernetes.io/projected/75aac7de-92db-4051-b4bb-227e2f69a0dd-kube-api-access-5vct4\") pod \"certified-operators-bvvtl\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:39:51 crc kubenswrapper[4833]: I1013 07:39:51.025571 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-catalog-content\") pod \"certified-operators-bvvtl\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:39:51 crc kubenswrapper[4833]: I1013 07:39:51.026124 4833 
Oct 13 07:39:51 crc kubenswrapper[4833]: I1013 07:39:51.026267 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-catalog-content\") pod \"certified-operators-bvvtl\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " pod="openshift-marketplace/certified-operators-bvvtl"
Oct 13 07:39:51 crc kubenswrapper[4833]: I1013 07:39:51.048477 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vct4\" (UniqueName: \"kubernetes.io/projected/75aac7de-92db-4051-b4bb-227e2f69a0dd-kube-api-access-5vct4\") pod \"certified-operators-bvvtl\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " pod="openshift-marketplace/certified-operators-bvvtl"
Oct 13 07:39:51 crc kubenswrapper[4833]: I1013 07:39:51.227569 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvvtl"
Oct 13 07:39:51 crc kubenswrapper[4833]: I1013 07:39:51.696529 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvvtl"]
Oct 13 07:39:51 crc kubenswrapper[4833]: W1013 07:39:51.705164 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aac7de_92db_4051_b4bb_227e2f69a0dd.slice/crio-acd3b17f37af28b065a2d5175b4eb9bfffa453a7869c81ae1868f43961a5421c WatchSource:0}: Error finding container acd3b17f37af28b065a2d5175b4eb9bfffa453a7869c81ae1868f43961a5421c: Status 404 returned error can't find the container with id acd3b17f37af28b065a2d5175b4eb9bfffa453a7869c81ae1868f43961a5421c
Oct 13 07:39:52 crc kubenswrapper[4833]: I1013 07:39:52.337998 4833 generic.go:334] "Generic (PLEG): container finished" podID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerID="1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89" exitCode=0
Oct 13 07:39:52 crc kubenswrapper[4833]: I1013 07:39:52.338099 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvvtl" event={"ID":"75aac7de-92db-4051-b4bb-227e2f69a0dd","Type":"ContainerDied","Data":"1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89"}
Oct 13 07:39:52 crc kubenswrapper[4833]: I1013 07:39:52.338292 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvvtl" event={"ID":"75aac7de-92db-4051-b4bb-227e2f69a0dd","Type":"ContainerStarted","Data":"acd3b17f37af28b065a2d5175b4eb9bfffa453a7869c81ae1868f43961a5421c"}
Oct 13 07:39:54 crc kubenswrapper[4833]: I1013 07:39:54.356246 4833 generic.go:334] "Generic (PLEG): container finished" podID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerID="a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888" exitCode=0
Oct 13 07:39:54 crc kubenswrapper[4833]: I1013 07:39:54.356350 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvvtl" event={"ID":"75aac7de-92db-4051-b4bb-227e2f69a0dd","Type":"ContainerDied","Data":"a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888"}
Oct 13 07:39:55 crc kubenswrapper[4833]: I1013 07:39:55.366935 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvvtl" event={"ID":"75aac7de-92db-4051-b4bb-227e2f69a0dd","Type":"ContainerStarted","Data":"ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1"}
Oct 13 07:39:55 crc kubenswrapper[4833]: I1013 07:39:55.386412 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bvvtl" podStartSLOduration=2.94281216 podStartE2EDuration="5.386382801s" podCreationTimestamp="2025-10-13 07:39:50 +0000 UTC" firstStartedPulling="2025-10-13 07:39:52.341071407 +0000 UTC m=+4282.441494363" lastFinishedPulling="2025-10-13 07:39:54.784642078 +0000 UTC m=+4284.885065004" observedRunningTime="2025-10-13 07:39:55.38565008 +0000 UTC m=+4285.486072996" watchObservedRunningTime="2025-10-13 07:39:55.386382801 +0000 UTC m=+4285.486805757"
Oct 13 07:40:00 crc kubenswrapper[4833]: I1013 07:40:00.542576 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 07:40:00 crc kubenswrapper[4833]: I1013 07:40:00.543595 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 07:40:00 crc kubenswrapper[4833]: I1013 07:40:00.543657 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss"
Oct 13 07:40:00 crc kubenswrapper[4833]: I1013 07:40:00.544460 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 07:40:00 crc kubenswrapper[4833]: I1013 07:40:00.544522 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" gracePeriod=600
Oct 13 07:40:00 crc kubenswrapper[4833]: E1013 07:40:00.665610 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.228090 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bvvtl"
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.228404 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bvvtl"
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.279406 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bvvtl"
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.414077 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" exitCode=0
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.414421 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a"}
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.414904 4833 scope.go:117] "RemoveContainer" containerID="37ccfe037d527dc1f623c1ffe515273eff62feb34b54804621df13b0947520fa"
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.415624 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a"
Oct 13 07:40:01 crc kubenswrapper[4833]: E1013 07:40:01.415979 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.476909 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bvvtl"
Oct 13 07:40:01 crc kubenswrapper[4833]: I1013 07:40:01.523705 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bvvtl"]
Oct 13 07:40:03 crc kubenswrapper[4833]: I1013 07:40:03.432420 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bvvtl" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerName="registry-server" containerID="cri-o://ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1" gracePeriod=2
Oct 13 07:40:03 crc kubenswrapper[4833]: I1013 07:40:03.864583 4833 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:40:03 crc kubenswrapper[4833]: I1013 07:40:03.935784 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-catalog-content\") pod \"75aac7de-92db-4051-b4bb-227e2f69a0dd\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " Oct 13 07:40:03 crc kubenswrapper[4833]: I1013 07:40:03.935905 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vct4\" (UniqueName: \"kubernetes.io/projected/75aac7de-92db-4051-b4bb-227e2f69a0dd-kube-api-access-5vct4\") pod \"75aac7de-92db-4051-b4bb-227e2f69a0dd\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " Oct 13 07:40:03 crc kubenswrapper[4833]: I1013 07:40:03.936025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-utilities\") pod \"75aac7de-92db-4051-b4bb-227e2f69a0dd\" (UID: \"75aac7de-92db-4051-b4bb-227e2f69a0dd\") " Oct 13 07:40:03 crc kubenswrapper[4833]: I1013 07:40:03.937145 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-utilities" (OuterVolumeSpecName: "utilities") pod "75aac7de-92db-4051-b4bb-227e2f69a0dd" (UID: "75aac7de-92db-4051-b4bb-227e2f69a0dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:40:03 crc kubenswrapper[4833]: I1013 07:40:03.943358 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75aac7de-92db-4051-b4bb-227e2f69a0dd-kube-api-access-5vct4" (OuterVolumeSpecName: "kube-api-access-5vct4") pod "75aac7de-92db-4051-b4bb-227e2f69a0dd" (UID: "75aac7de-92db-4051-b4bb-227e2f69a0dd"). InnerVolumeSpecName "kube-api-access-5vct4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:40:03 crc kubenswrapper[4833]: I1013 07:40:03.986496 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75aac7de-92db-4051-b4bb-227e2f69a0dd" (UID: "75aac7de-92db-4051-b4bb-227e2f69a0dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.037288 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vct4\" (UniqueName: \"kubernetes.io/projected/75aac7de-92db-4051-b4bb-227e2f69a0dd-kube-api-access-5vct4\") on node \"crc\" DevicePath \"\"" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.037325 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.037336 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75aac7de-92db-4051-b4bb-227e2f69a0dd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.442097 4833 generic.go:334] "Generic (PLEG): container finished" podID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerID="ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1" exitCode=0 Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.442143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvvtl" event={"ID":"75aac7de-92db-4051-b4bb-227e2f69a0dd","Type":"ContainerDied","Data":"ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1"} Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.442176 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvvtl" event={"ID":"75aac7de-92db-4051-b4bb-227e2f69a0dd","Type":"ContainerDied","Data":"acd3b17f37af28b065a2d5175b4eb9bfffa453a7869c81ae1868f43961a5421c"} Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.442198 4833 scope.go:117] "RemoveContainer" containerID="ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.442227 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bvvtl" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.469524 4833 scope.go:117] "RemoveContainer" containerID="a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.507920 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bvvtl"] Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.509192 4833 scope.go:117] "RemoveContainer" containerID="1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.515147 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bvvtl"] Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.537075 4833 scope.go:117] "RemoveContainer" containerID="ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1" Oct 13 07:40:04 crc kubenswrapper[4833]: E1013 07:40:04.537570 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1\": container with ID starting with ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1 not found: ID does not exist" containerID="ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.537600 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1"} err="failed to get container status \"ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1\": rpc error: code = NotFound desc = could not find container \"ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1\": container with ID starting with ca8d8ead7ab22abc0e97372565fe834d488f8af8ba7368d7acaea091b74689d1 not found: ID does not exist" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.537633 4833 scope.go:117] "RemoveContainer" containerID="a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888" Oct 13 07:40:04 crc kubenswrapper[4833]: E1013 07:40:04.538010 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888\": container with ID starting with a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888 not found: ID does not exist" containerID="a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.538051 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888"} err="failed to get container status \"a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888\": rpc error: code = NotFound desc = could not find container \"a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888\": container with ID starting with a9b49c2a8b3b951b9554311c3137973d0708b561cc1c3b18b4768f4fe1385888 not found: ID does not exist" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.538078 4833 scope.go:117] "RemoveContainer" containerID="1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89" Oct 13 07:40:04 crc kubenswrapper[4833]: E1013 07:40:04.538472 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89\": container with ID starting with 1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89 not found: ID does not exist" containerID="1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.538504 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89"} err="failed to get container status \"1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89\": rpc error: code = NotFound desc = could not find container \"1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89\": container with ID starting with 1528b87a4fc6adab1df43ab5e5d7a7436d9df153af5c3d204b20777837e41a89 not found: ID does not exist" Oct 13 07:40:04 crc kubenswrapper[4833]: I1013 07:40:04.641961 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" path="/var/lib/kubelet/pods/75aac7de-92db-4051-b4bb-227e2f69a0dd/volumes" Oct 13 07:40:15 crc kubenswrapper[4833]: I1013 07:40:15.626960 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:40:15 crc kubenswrapper[4833]: E1013 07:40:15.629015 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:40:27 crc kubenswrapper[4833]: I1013 07:40:27.627525 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:40:27 crc kubenswrapper[4833]: E1013 07:40:27.628278 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:40:41 crc kubenswrapper[4833]: I1013 07:40:41.627351 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:40:41 crc kubenswrapper[4833]: E1013 07:40:41.628288 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:40:52 crc kubenswrapper[4833]: I1013 07:40:52.626937 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:40:52 crc kubenswrapper[4833]: E1013 07:40:52.630689 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:41:05 crc kubenswrapper[4833]: I1013 07:41:05.627248 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:41:05 crc kubenswrapper[4833]: E1013 07:41:05.628058 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:41:20 crc kubenswrapper[4833]: I1013 07:41:20.630976 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:41:20 crc kubenswrapper[4833]: E1013 07:41:20.631671 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:41:34 crc kubenswrapper[4833]: I1013 07:41:34.627732 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:41:34 crc kubenswrapper[4833]: E1013 07:41:34.629977 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:41:46 crc kubenswrapper[4833]: I1013 07:41:46.626772 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:41:46 crc kubenswrapper[4833]: E1013 07:41:46.627879 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:42:00 crc kubenswrapper[4833]: I1013 07:42:00.636340 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:42:00 crc kubenswrapper[4833]: E1013 07:42:00.637861 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:42:15 crc kubenswrapper[4833]: I1013 07:42:15.627695 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:42:15 crc kubenswrapper[4833]: E1013 07:42:15.628725 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:42:26 crc kubenswrapper[4833]: I1013 07:42:26.628425 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:42:26 crc kubenswrapper[4833]: E1013 07:42:26.630108 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:42:40 crc kubenswrapper[4833]: I1013 07:42:40.630775 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:42:40 crc kubenswrapper[4833]: E1013 07:42:40.631559 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:42:52 crc kubenswrapper[4833]: I1013 07:42:52.627997 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:42:52 crc kubenswrapper[4833]: E1013 07:42:52.628984 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:43:06 crc kubenswrapper[4833]: I1013 07:43:06.628430 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:43:06 crc kubenswrapper[4833]: E1013 07:43:06.629471 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:43:18 crc kubenswrapper[4833]: I1013 07:43:18.626767 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:43:18 crc kubenswrapper[4833]: E1013 07:43:18.627567 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:43:29 crc kubenswrapper[4833]: I1013 07:43:29.629767 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:43:29 crc kubenswrapper[4833]: E1013 07:43:29.630606 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:43:41 crc kubenswrapper[4833]: I1013 07:43:41.627814 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:43:41 crc kubenswrapper[4833]: E1013 07:43:41.629160 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:43:52 crc kubenswrapper[4833]: I1013 07:43:52.627598 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:43:52 crc kubenswrapper[4833]: E1013 07:43:52.628835 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:44:05 crc kubenswrapper[4833]: I1013 07:44:05.626704 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:44:05 crc kubenswrapper[4833]: E1013 07:44:05.627309 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:44:20 crc kubenswrapper[4833]: I1013 07:44:20.632175 4833 scope.go:117] "RemoveContainer" 
containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:44:20 crc kubenswrapper[4833]: E1013 07:44:20.633026 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.763198 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6fx"] Oct 13 07:44:28 crc kubenswrapper[4833]: E1013 07:44:28.764914 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerName="extract-utilities" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.764938 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerName="extract-utilities" Oct 13 07:44:28 crc kubenswrapper[4833]: E1013 07:44:28.764962 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerName="extract-content" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.764970 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerName="extract-content" Oct 13 07:44:28 crc kubenswrapper[4833]: E1013 07:44:28.764986 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerName="registry-server" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.764993 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerName="registry-server" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.765176 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="75aac7de-92db-4051-b4bb-227e2f69a0dd" containerName="registry-server" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.766458 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.782735 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6fx"] Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.900048 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flw8b\" (UniqueName: \"kubernetes.io/projected/74825a65-ef20-4740-a6ce-2731dfd96eef-kube-api-access-flw8b\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.900116 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-catalog-content\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:28 crc kubenswrapper[4833]: I1013 07:44:28.900181 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-utilities\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.001184 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flw8b\" (UniqueName: \"kubernetes.io/projected/74825a65-ef20-4740-a6ce-2731dfd96eef-kube-api-access-flw8b\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.001255 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-catalog-content\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.001299 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-utilities\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.001771 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-catalog-content\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.001814 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-utilities\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.022353 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-flw8b\" (UniqueName: \"kubernetes.io/projected/74825a65-ef20-4740-a6ce-2731dfd96eef-kube-api-access-flw8b\") pod \"redhat-marketplace-qw6fx\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.097954 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.559967 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6fx"] Oct 13 07:44:29 crc kubenswrapper[4833]: I1013 07:44:29.696521 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6fx" event={"ID":"74825a65-ef20-4740-a6ce-2731dfd96eef","Type":"ContainerStarted","Data":"e9f28b5eff8deacfe1d00242033e73b4d90248b7e005775ad6018af356982281"} Oct 13 07:44:30 crc kubenswrapper[4833]: I1013 07:44:30.713465 4833 generic.go:334] "Generic (PLEG): container finished" podID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerID="449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45" exitCode=0 Oct 13 07:44:30 crc kubenswrapper[4833]: I1013 07:44:30.713859 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6fx" event={"ID":"74825a65-ef20-4740-a6ce-2731dfd96eef","Type":"ContainerDied","Data":"449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45"} Oct 13 07:44:30 crc kubenswrapper[4833]: I1013 07:44:30.716723 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:44:31 crc kubenswrapper[4833]: I1013 07:44:31.722491 4833 generic.go:334] "Generic (PLEG): container finished" podID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerID="9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be" exitCode=0 Oct 13 07:44:31 crc kubenswrapper[4833]: I1013 07:44:31.722565 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6fx" event={"ID":"74825a65-ef20-4740-a6ce-2731dfd96eef","Type":"ContainerDied","Data":"9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be"} Oct 13 07:44:32 crc kubenswrapper[4833]: I1013 07:44:32.741096 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6fx" event={"ID":"74825a65-ef20-4740-a6ce-2731dfd96eef","Type":"ContainerStarted","Data":"4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771"} Oct 13 07:44:32 crc kubenswrapper[4833]: I1013 07:44:32.769398 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qw6fx" podStartSLOduration=3.332369581 podStartE2EDuration="4.769375952s" podCreationTimestamp="2025-10-13 07:44:28 +0000 UTC" firstStartedPulling="2025-10-13 07:44:30.716389184 +0000 UTC m=+4560.816812110" lastFinishedPulling="2025-10-13 07:44:32.153395565 +0000 UTC m=+4562.253818481" observedRunningTime="2025-10-13 07:44:32.762483566 +0000 UTC m=+4562.862906482" watchObservedRunningTime="2025-10-13 07:44:32.769375952 +0000 UTC m=+4562.869798878" Oct 13 07:44:33 crc kubenswrapper[4833]: I1013 07:44:33.627217 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:44:33 crc kubenswrapper[4833]: E1013 07:44:33.627793 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:44:39 crc kubenswrapper[4833]: I1013 07:44:39.099210 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:39 crc kubenswrapper[4833]: I1013 07:44:39.100022 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:39 crc kubenswrapper[4833]: I1013 07:44:39.183048 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:39 crc kubenswrapper[4833]: I1013 07:44:39.853444 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:39 crc kubenswrapper[4833]: I1013 07:44:39.901214 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6fx"] Oct 13 07:44:41 crc kubenswrapper[4833]: I1013 07:44:41.817386 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qw6fx" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerName="registry-server" containerID="cri-o://4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771" gracePeriod=2 Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.239415 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.424997 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flw8b\" (UniqueName: \"kubernetes.io/projected/74825a65-ef20-4740-a6ce-2731dfd96eef-kube-api-access-flw8b\") pod \"74825a65-ef20-4740-a6ce-2731dfd96eef\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.425090 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-utilities\") pod \"74825a65-ef20-4740-a6ce-2731dfd96eef\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.425134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-catalog-content\") pod \"74825a65-ef20-4740-a6ce-2731dfd96eef\" (UID: \"74825a65-ef20-4740-a6ce-2731dfd96eef\") " Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.426680 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-utilities" (OuterVolumeSpecName: "utilities") pod "74825a65-ef20-4740-a6ce-2731dfd96eef" (UID: "74825a65-ef20-4740-a6ce-2731dfd96eef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.430273 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74825a65-ef20-4740-a6ce-2731dfd96eef-kube-api-access-flw8b" (OuterVolumeSpecName: "kube-api-access-flw8b") pod "74825a65-ef20-4740-a6ce-2731dfd96eef" (UID: "74825a65-ef20-4740-a6ce-2731dfd96eef"). InnerVolumeSpecName "kube-api-access-flw8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.444044 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74825a65-ef20-4740-a6ce-2731dfd96eef" (UID: "74825a65-ef20-4740-a6ce-2731dfd96eef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.526697 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flw8b\" (UniqueName: \"kubernetes.io/projected/74825a65-ef20-4740-a6ce-2731dfd96eef-kube-api-access-flw8b\") on node \"crc\" DevicePath \"\"" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.526733 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.526742 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74825a65-ef20-4740-a6ce-2731dfd96eef-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.830090 4833 generic.go:334] "Generic (PLEG): container finished" podID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerID="4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771" exitCode=0 Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.830148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6fx" event={"ID":"74825a65-ef20-4740-a6ce-2731dfd96eef","Type":"ContainerDied","Data":"4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771"} Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.830189 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6fx" event={"ID":"74825a65-ef20-4740-a6ce-2731dfd96eef","Type":"ContainerDied","Data":"e9f28b5eff8deacfe1d00242033e73b4d90248b7e005775ad6018af356982281"} Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.830212 4833 scope.go:117] "RemoveContainer" containerID="4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.830206 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qw6fx" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.859951 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6fx"] Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.865233 4833 scope.go:117] "RemoveContainer" containerID="9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.870900 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6fx"] Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.884436 4833 scope.go:117] "RemoveContainer" containerID="449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.909300 4833 scope.go:117] "RemoveContainer" containerID="4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771" Oct 13 07:44:42 crc kubenswrapper[4833]: E1013 07:44:42.909815 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771\": container with ID starting with 4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771 not found: ID does not exist" containerID="4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.909883 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771"} err="failed to get container status \"4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771\": rpc error: code = NotFound desc = could not find container \"4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771\": container with ID starting with 4256e1de296f888aeb830cf931258e506afcb4bc1b968cc4a22d511314d71771 not found: ID does not exist" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.909921 4833 scope.go:117] "RemoveContainer" containerID="9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be" Oct 13 07:44:42 crc kubenswrapper[4833]: E1013 07:44:42.910449 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be\": container with ID starting with 9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be not found: ID does not exist" containerID="9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.910482 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be"} err="failed to get container status \"9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be\": rpc error: code = NotFound desc = could not find container \"9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be\": container with ID starting with 9189e97e3ea2c01707e6cae853f2c87ae82159478df90364b3b004597ce4a0be not found: ID does not exist" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.910510 4833 scope.go:117] "RemoveContainer" containerID="449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45" Oct 13 07:44:42 crc kubenswrapper[4833]: E1013 07:44:42.911050 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45\": container with ID starting with 449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45 not found: ID does not exist" containerID="449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45" Oct 13 07:44:42 crc kubenswrapper[4833]: I1013 07:44:42.911081 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45"} err="failed to get container status \"449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45\": rpc error: code = NotFound desc = could not find container \"449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45\": container with ID starting with 449aa46c328c283cd8ff3945673259d10eb34378015da0530018b9e0957a0e45 not found: ID does not exist" Oct 13 07:44:44 crc kubenswrapper[4833]: I1013 07:44:44.635573 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" path="/var/lib/kubelet/pods/74825a65-ef20-4740-a6ce-2731dfd96eef/volumes" Oct 13 07:44:48 crc kubenswrapper[4833]: I1013 07:44:48.626840 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:44:48 crc kubenswrapper[4833]: E1013 07:44:48.627346 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.142265 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4"] Oct 13 07:45:00 crc kubenswrapper[4833]: E1013 07:45:00.143079 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerName="registry-server" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.143093 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerName="registry-server" Oct 13 07:45:00 crc kubenswrapper[4833]: E1013 07:45:00.143109 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerName="extract-utilities" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.143114 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerName="extract-utilities" Oct 13 07:45:00 crc kubenswrapper[4833]: E1013 07:45:00.143124 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerName="extract-content" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.143130 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerName="extract-content" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.143293 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="74825a65-ef20-4740-a6ce-2731dfd96eef" containerName="registry-server" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.143855 4833 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.146792 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.147024 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.155145 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4"] Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.191930 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/781ba963-7821-4dfd-9baa-8a66f21e58c0-secret-volume\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.192091 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xz22\" (UniqueName: \"kubernetes.io/projected/781ba963-7821-4dfd-9baa-8a66f21e58c0-kube-api-access-2xz22\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.192119 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/781ba963-7821-4dfd-9baa-8a66f21e58c0-config-volume\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.293348 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/781ba963-7821-4dfd-9baa-8a66f21e58c0-secret-volume\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.293440 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xz22\" (UniqueName: \"kubernetes.io/projected/781ba963-7821-4dfd-9baa-8a66f21e58c0-kube-api-access-2xz22\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.293462 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/781ba963-7821-4dfd-9baa-8a66f21e58c0-config-volume\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.294842 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/781ba963-7821-4dfd-9baa-8a66f21e58c0-config-volume\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.302081 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/781ba963-7821-4dfd-9baa-8a66f21e58c0-secret-volume\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.309282 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xz22\" (UniqueName: \"kubernetes.io/projected/781ba963-7821-4dfd-9baa-8a66f21e58c0-kube-api-access-2xz22\") pod \"collect-profiles-29339025-lnjs4\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.495670 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.641257 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.945033 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4"] Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.976087 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"691976b18bac0545b5c830635fa73e1fa7474db2a1c6b6609dd3465486bed431"} Oct 13 07:45:00 crc kubenswrapper[4833]: I1013 07:45:00.977223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" event={"ID":"781ba963-7821-4dfd-9baa-8a66f21e58c0","Type":"ContainerStarted","Data":"bc8fa0ebe1082ac1f163056d1718ae1b694b3349dfdae4a98cc0ae4cb8a46829"} Oct 13 07:45:01 crc kubenswrapper[4833]: I1013 07:45:01.988213 4833 generic.go:334] "Generic (PLEG): container finished" podID="781ba963-7821-4dfd-9baa-8a66f21e58c0" containerID="56e448c4790e7a12a463971e3e51c26270f114ad6970436ad09b0932cb9e3191" exitCode=0 Oct 13 07:45:01 crc kubenswrapper[4833]: I1013 07:45:01.988266 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" event={"ID":"781ba963-7821-4dfd-9baa-8a66f21e58c0","Type":"ContainerDied","Data":"56e448c4790e7a12a463971e3e51c26270f114ad6970436ad09b0932cb9e3191"} Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.272598 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.452808 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/781ba963-7821-4dfd-9baa-8a66f21e58c0-secret-volume\") pod \"781ba963-7821-4dfd-9baa-8a66f21e58c0\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.452867 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/781ba963-7821-4dfd-9baa-8a66f21e58c0-config-volume\") pod \"781ba963-7821-4dfd-9baa-8a66f21e58c0\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.452912 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xz22\" (UniqueName: \"kubernetes.io/projected/781ba963-7821-4dfd-9baa-8a66f21e58c0-kube-api-access-2xz22\") pod \"781ba963-7821-4dfd-9baa-8a66f21e58c0\" (UID: \"781ba963-7821-4dfd-9baa-8a66f21e58c0\") " Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.453728 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781ba963-7821-4dfd-9baa-8a66f21e58c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "781ba963-7821-4dfd-9baa-8a66f21e58c0" (UID: "781ba963-7821-4dfd-9baa-8a66f21e58c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.459156 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781ba963-7821-4dfd-9baa-8a66f21e58c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "781ba963-7821-4dfd-9baa-8a66f21e58c0" (UID: "781ba963-7821-4dfd-9baa-8a66f21e58c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.459734 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781ba963-7821-4dfd-9baa-8a66f21e58c0-kube-api-access-2xz22" (OuterVolumeSpecName: "kube-api-access-2xz22") pod "781ba963-7821-4dfd-9baa-8a66f21e58c0" (UID: "781ba963-7821-4dfd-9baa-8a66f21e58c0"). InnerVolumeSpecName "kube-api-access-2xz22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.554290 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/781ba963-7821-4dfd-9baa-8a66f21e58c0-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.554341 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/781ba963-7821-4dfd-9baa-8a66f21e58c0-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 07:45:03 crc kubenswrapper[4833]: I1013 07:45:03.554355 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xz22\" (UniqueName: \"kubernetes.io/projected/781ba963-7821-4dfd-9baa-8a66f21e58c0-kube-api-access-2xz22\") on node \"crc\" DevicePath \"\"" Oct 13 07:45:04 crc kubenswrapper[4833]: I1013 07:45:04.004367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" event={"ID":"781ba963-7821-4dfd-9baa-8a66f21e58c0","Type":"ContainerDied","Data":"bc8fa0ebe1082ac1f163056d1718ae1b694b3349dfdae4a98cc0ae4cb8a46829"} Oct 13 07:45:04 crc kubenswrapper[4833]: I1013 07:45:04.004451 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc8fa0ebe1082ac1f163056d1718ae1b694b3349dfdae4a98cc0ae4cb8a46829" Oct 13 07:45:04 crc kubenswrapper[4833]: I1013 07:45:04.004512 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4" Oct 13 07:45:04 crc kubenswrapper[4833]: I1013 07:45:04.345514 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv"] Oct 13 07:45:04 crc kubenswrapper[4833]: I1013 07:45:04.351312 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338980-fcrcv"] Oct 13 07:45:04 crc kubenswrapper[4833]: I1013 07:45:04.635245 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074bf6e1-e4bc-43f6-a26e-18f7c5fbe828" path="/var/lib/kubelet/pods/074bf6e1-e4bc-43f6-a26e-18f7c5fbe828/volumes" Oct 13 07:45:49 crc kubenswrapper[4833]: I1013 07:45:49.983768 4833 scope.go:117] "RemoveContainer" containerID="9af7eb907b7dc9f2124815b9411853ac86802063f6e2e77cc4f932e4c1b39ce7" Oct 13 07:47:00 crc kubenswrapper[4833]: I1013 07:47:00.542958 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:47:00 crc kubenswrapper[4833]: I1013 07:47:00.543694 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:47:30 crc kubenswrapper[4833]: I1013 07:47:30.543246 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 13 07:47:30 crc kubenswrapper[4833]: I1013 07:47:30.543921 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:48:00 crc kubenswrapper[4833]: I1013 07:48:00.543096 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:48:00 crc kubenswrapper[4833]: I1013 07:48:00.543843 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:48:00 crc kubenswrapper[4833]: I1013 07:48:00.543900 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:48:00 crc kubenswrapper[4833]: I1013 07:48:00.544678 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"691976b18bac0545b5c830635fa73e1fa7474db2a1c6b6609dd3465486bed431"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:48:00 crc kubenswrapper[4833]: I1013 07:48:00.544750 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://691976b18bac0545b5c830635fa73e1fa7474db2a1c6b6609dd3465486bed431" gracePeriod=600 Oct 13 07:48:01 crc kubenswrapper[4833]: I1013 07:48:01.446667 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="691976b18bac0545b5c830635fa73e1fa7474db2a1c6b6609dd3465486bed431" exitCode=0 Oct 13 07:48:01 crc kubenswrapper[4833]: I1013 07:48:01.446766 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"691976b18bac0545b5c830635fa73e1fa7474db2a1c6b6609dd3465486bed431"} Oct 13 07:48:01 crc kubenswrapper[4833]: I1013 07:48:01.447129 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba"} Oct 13 07:48:01 crc kubenswrapper[4833]: I1013 07:48:01.447176 4833 scope.go:117] "RemoveContainer" containerID="32a6d82a8bf2d5ec1a4e4b9475db65556d7333730766b0699b8cddfd7466600a" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.722781 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-jq9tm"] Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.728204 4833 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["crc-storage/crc-storage-crc-jq9tm"] Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.882607 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kqhnn"] Oct 13 07:48:37 crc kubenswrapper[4833]: E1013 07:48:37.883017 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781ba963-7821-4dfd-9baa-8a66f21e58c0" containerName="collect-profiles" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.883042 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="781ba963-7821-4dfd-9baa-8a66f21e58c0" containerName="collect-profiles" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.883262 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="781ba963-7821-4dfd-9baa-8a66f21e58c0" containerName="collect-profiles" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.883912 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.886447 4833 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bcgmx" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.887226 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.887998 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.888150 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.889428 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kqhnn"] Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.990474 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6d4a2e60-5406-45a5-814b-cb4685a49da0-crc-storage\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.990557 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5z74\" (UniqueName: \"kubernetes.io/projected/6d4a2e60-5406-45a5-814b-cb4685a49da0-kube-api-access-q5z74\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:37 crc kubenswrapper[4833]: I1013 07:48:37.990948 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6d4a2e60-5406-45a5-814b-cb4685a49da0-node-mnt\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.092465 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6d4a2e60-5406-45a5-814b-cb4685a49da0-crc-storage\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.092620 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q5z74\" (UniqueName: \"kubernetes.io/projected/6d4a2e60-5406-45a5-814b-cb4685a49da0-kube-api-access-q5z74\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.092713 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6d4a2e60-5406-45a5-814b-cb4685a49da0-node-mnt\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.093129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6d4a2e60-5406-45a5-814b-cb4685a49da0-node-mnt\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.093610 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6d4a2e60-5406-45a5-814b-cb4685a49da0-crc-storage\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.129279 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5z74\" (UniqueName: \"kubernetes.io/projected/6d4a2e60-5406-45a5-814b-cb4685a49da0-kube-api-access-q5z74\") pod \"crc-storage-crc-kqhnn\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.203083 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.640979 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f098f03e-e4ee-4cad-b3cd-6887b3423124" path="/var/lib/kubelet/pods/f098f03e-e4ee-4cad-b3cd-6887b3423124/volumes" Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.696354 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kqhnn"] Oct 13 07:48:38 crc kubenswrapper[4833]: I1013 07:48:38.778253 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kqhnn" event={"ID":"6d4a2e60-5406-45a5-814b-cb4685a49da0","Type":"ContainerStarted","Data":"0b6957ce4c92a2f19802a88da47a431ff6b551c6ced27267db1b7b43579bb32b"} Oct 13 07:48:39 crc kubenswrapper[4833]: I1013 07:48:39.789420 4833 generic.go:334] "Generic (PLEG): container finished" podID="6d4a2e60-5406-45a5-814b-cb4685a49da0" containerID="050f4d381c5181b1988676067674e4b7c6291ec14b2caca8ec3c03b15355acbf" exitCode=0 Oct 13 07:48:39 crc kubenswrapper[4833]: I1013 07:48:39.789525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kqhnn" event={"ID":"6d4a2e60-5406-45a5-814b-cb4685a49da0","Type":"ContainerDied","Data":"050f4d381c5181b1988676067674e4b7c6291ec14b2caca8ec3c03b15355acbf"} Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.110594 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.153178 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6d4a2e60-5406-45a5-814b-cb4685a49da0-node-mnt\") pod \"6d4a2e60-5406-45a5-814b-cb4685a49da0\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.153256 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5z74\" (UniqueName: \"kubernetes.io/projected/6d4a2e60-5406-45a5-814b-cb4685a49da0-kube-api-access-q5z74\") pod \"6d4a2e60-5406-45a5-814b-cb4685a49da0\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.153404 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6d4a2e60-5406-45a5-814b-cb4685a49da0-crc-storage\") pod \"6d4a2e60-5406-45a5-814b-cb4685a49da0\" (UID: \"6d4a2e60-5406-45a5-814b-cb4685a49da0\") " Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.154320 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d4a2e60-5406-45a5-814b-cb4685a49da0-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "6d4a2e60-5406-45a5-814b-cb4685a49da0" (UID: "6d4a2e60-5406-45a5-814b-cb4685a49da0"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.159646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4a2e60-5406-45a5-814b-cb4685a49da0-kube-api-access-q5z74" (OuterVolumeSpecName: "kube-api-access-q5z74") pod "6d4a2e60-5406-45a5-814b-cb4685a49da0" (UID: "6d4a2e60-5406-45a5-814b-cb4685a49da0"). InnerVolumeSpecName "kube-api-access-q5z74". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.175152 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4a2e60-5406-45a5-814b-cb4685a49da0-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "6d4a2e60-5406-45a5-814b-cb4685a49da0" (UID: "6d4a2e60-5406-45a5-814b-cb4685a49da0"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.254904 4833 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6d4a2e60-5406-45a5-814b-cb4685a49da0-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.254945 4833 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6d4a2e60-5406-45a5-814b-cb4685a49da0-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.254956 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5z74\" (UniqueName: \"kubernetes.io/projected/6d4a2e60-5406-45a5-814b-cb4685a49da0-kube-api-access-q5z74\") on node \"crc\" DevicePath \"\"" Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.810699 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kqhnn" event={"ID":"6d4a2e60-5406-45a5-814b-cb4685a49da0","Type":"ContainerDied","Data":"0b6957ce4c92a2f19802a88da47a431ff6b551c6ced27267db1b7b43579bb32b"} Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.810749 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b6957ce4c92a2f19802a88da47a431ff6b551c6ced27267db1b7b43579bb32b" Oct 13 07:48:41 crc kubenswrapper[4833]: I1013 07:48:41.810777 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kqhnn" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.498836 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kqhnn"] Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.503374 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kqhnn"] Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.641829 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-28hch"] Oct 13 07:48:43 crc kubenswrapper[4833]: E1013 07:48:43.642139 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4a2e60-5406-45a5-814b-cb4685a49da0" containerName="storage" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.642151 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4a2e60-5406-45a5-814b-cb4685a49da0" containerName="storage" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.642321 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4a2e60-5406-45a5-814b-cb4685a49da0" containerName="storage" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.642799 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.644806 4833 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bcgmx" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.645312 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.645468 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.646449 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.658778 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-28hch"] Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.699287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-crc-storage\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.699383 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-node-mnt\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.699514 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qhtr\" (UniqueName: \"kubernetes.io/projected/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-kube-api-access-6qhtr\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.800938 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-crc-storage\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.800989 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-node-mnt\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.801040 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qhtr\" (UniqueName: \"kubernetes.io/projected/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-kube-api-access-6qhtr\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.801552 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-node-mnt\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " 
pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.801978 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-crc-storage\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.828612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qhtr\" (UniqueName: \"kubernetes.io/projected/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-kube-api-access-6qhtr\") pod \"crc-storage-crc-28hch\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:43 crc kubenswrapper[4833]: I1013 07:48:43.973798 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:44 crc kubenswrapper[4833]: I1013 07:48:44.419068 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-28hch"] Oct 13 07:48:44 crc kubenswrapper[4833]: W1013 07:48:44.426011 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92cc1df1_7245_4cbb_b6f8_1cbb9d50fd05.slice/crio-342b02ee5289e10eb6e38e60059ce1a297d927d8ca5fddba5c2b99e558eb8640 WatchSource:0}: Error finding container 342b02ee5289e10eb6e38e60059ce1a297d927d8ca5fddba5c2b99e558eb8640: Status 404 returned error can't find the container with id 342b02ee5289e10eb6e38e60059ce1a297d927d8ca5fddba5c2b99e558eb8640 Oct 13 07:48:44 crc kubenswrapper[4833]: I1013 07:48:44.644409 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4a2e60-5406-45a5-814b-cb4685a49da0" path="/var/lib/kubelet/pods/6d4a2e60-5406-45a5-814b-cb4685a49da0/volumes" Oct 13 07:48:44 crc kubenswrapper[4833]: I1013 07:48:44.839532 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-28hch" event={"ID":"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05","Type":"ContainerStarted","Data":"342b02ee5289e10eb6e38e60059ce1a297d927d8ca5fddba5c2b99e558eb8640"} Oct 13 07:48:45 crc kubenswrapper[4833]: I1013 07:48:45.852046 4833 generic.go:334] "Generic (PLEG): container finished" podID="92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05" containerID="742536e35e0e25b119139bccbebd54b7af55cd110df841964603242707f5a1ef" exitCode=0 Oct 13 07:48:45 crc kubenswrapper[4833]: I1013 07:48:45.852269 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-28hch" event={"ID":"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05","Type":"ContainerDied","Data":"742536e35e0e25b119139bccbebd54b7af55cd110df841964603242707f5a1ef"} Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.175296 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.254098 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-crc-storage\") pod \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.254285 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-node-mnt\") pod \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.254451 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05" (UID: "92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.254562 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qhtr\" (UniqueName: \"kubernetes.io/projected/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-kube-api-access-6qhtr\") pod \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\" (UID: \"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05\") " Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.254991 4833 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.260292 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-kube-api-access-6qhtr" (OuterVolumeSpecName: "kube-api-access-6qhtr") pod "92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05" (UID: "92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05"). InnerVolumeSpecName "kube-api-access-6qhtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.277333 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05" (UID: "92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.356973 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qhtr\" (UniqueName: \"kubernetes.io/projected/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-kube-api-access-6qhtr\") on node \"crc\" DevicePath \"\"" Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.357028 4833 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.873066 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-28hch" event={"ID":"92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05","Type":"ContainerDied","Data":"342b02ee5289e10eb6e38e60059ce1a297d927d8ca5fddba5c2b99e558eb8640"} Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.873124 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="342b02ee5289e10eb6e38e60059ce1a297d927d8ca5fddba5c2b99e558eb8640" Oct 13 07:48:47 crc kubenswrapper[4833]: I1013 07:48:47.873203 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-28hch" Oct 13 07:48:50 crc kubenswrapper[4833]: I1013 07:48:50.080989 4833 scope.go:117] "RemoveContainer" containerID="0e81e31fbe90c4b224e4a69666ed044631eace94d7dcc2716994858b155a4812" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.136996 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tpvtp"] Oct 13 07:49:15 crc kubenswrapper[4833]: E1013 07:49:15.137868 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05" containerName="storage" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.137883 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05" containerName="storage" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.138081 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cc1df1-7245-4cbb-b6f8-1cbb9d50fd05" containerName="storage" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.139440 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.146789 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpvtp"] Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.194178 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbh42\" (UniqueName: \"kubernetes.io/projected/02057902-11d0-4e91-9fb5-fc1a1df663a5-kube-api-access-dbh42\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.194711 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-utilities\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.194882 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-catalog-content\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.295885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbh42\" (UniqueName: \"kubernetes.io/projected/02057902-11d0-4e91-9fb5-fc1a1df663a5-kube-api-access-dbh42\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.295940 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-utilities\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.296023 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-catalog-content\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.296584 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-catalog-content\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.296840 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-utilities\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.323368 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dbh42\" (UniqueName: \"kubernetes.io/projected/02057902-11d0-4e91-9fb5-fc1a1df663a5-kube-api-access-dbh42\") pod \"community-operators-tpvtp\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.490646 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:15 crc kubenswrapper[4833]: I1013 07:49:15.980919 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpvtp"] Oct 13 07:49:16 crc kubenswrapper[4833]: I1013 07:49:16.104281 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpvtp" event={"ID":"02057902-11d0-4e91-9fb5-fc1a1df663a5","Type":"ContainerStarted","Data":"96cf055e19c00d18d0617dbbd84446cb6fb9e568e29db48a7028b548af5c2966"} Oct 13 07:49:17 crc kubenswrapper[4833]: I1013 07:49:17.117924 4833 generic.go:334] "Generic (PLEG): container finished" podID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerID="1e83e5474bb27b1de32c05cb5c3021d0b18c887c708101cf4e21b8cc136edf5a" exitCode=0 Oct 13 07:49:17 crc kubenswrapper[4833]: I1013 07:49:17.118154 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpvtp" event={"ID":"02057902-11d0-4e91-9fb5-fc1a1df663a5","Type":"ContainerDied","Data":"1e83e5474bb27b1de32c05cb5c3021d0b18c887c708101cf4e21b8cc136edf5a"} Oct 13 07:49:18 crc kubenswrapper[4833]: I1013 07:49:18.128043 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpvtp" event={"ID":"02057902-11d0-4e91-9fb5-fc1a1df663a5","Type":"ContainerStarted","Data":"4416990d4fa2c915b9a125ae4254791ee4bb510d5c1e1dedafe84c4be43a7f60"} Oct 13 07:49:19 crc kubenswrapper[4833]: I1013 07:49:19.136387 4833 generic.go:334] "Generic (PLEG): container finished" podID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerID="4416990d4fa2c915b9a125ae4254791ee4bb510d5c1e1dedafe84c4be43a7f60" exitCode=0 Oct 13 07:49:19 crc kubenswrapper[4833]: I1013 07:49:19.136450 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpvtp" event={"ID":"02057902-11d0-4e91-9fb5-fc1a1df663a5","Type":"ContainerDied","Data":"4416990d4fa2c915b9a125ae4254791ee4bb510d5c1e1dedafe84c4be43a7f60"} Oct 13 07:49:21 crc kubenswrapper[4833]: I1013 07:49:21.173567 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpvtp" event={"ID":"02057902-11d0-4e91-9fb5-fc1a1df663a5","Type":"ContainerStarted","Data":"fe07c4e7e65dd54be44890e28cec47ce5b82047a481b073f39a2e9cbb413af08"} Oct 13 07:49:21 crc kubenswrapper[4833]: I1013 07:49:21.196469 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tpvtp" podStartSLOduration=3.520128566 podStartE2EDuration="6.196455171s" podCreationTimestamp="2025-10-13 07:49:15 +0000 UTC" firstStartedPulling="2025-10-13 07:49:17.123287045 +0000 UTC m=+4847.223709971" lastFinishedPulling="2025-10-13 07:49:19.79961365 +0000 UTC m=+4849.900036576" observedRunningTime="2025-10-13 07:49:21.192381705 +0000 UTC m=+4851.292804621" watchObservedRunningTime="2025-10-13 07:49:21.196455171 +0000 UTC m=+4851.296878087" Oct 13 07:49:25 crc kubenswrapper[4833]: I1013 07:49:25.491031 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:25 crc kubenswrapper[4833]: I1013 07:49:25.491929 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:25 crc kubenswrapper[4833]: I1013 07:49:25.570327 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:26 crc kubenswrapper[4833]: I1013 07:49:26.294598 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:26 crc kubenswrapper[4833]: I1013 07:49:26.356038 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tpvtp"] Oct 13 07:49:28 crc kubenswrapper[4833]: I1013 07:49:28.230202 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tpvtp" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerName="registry-server" containerID="cri-o://fe07c4e7e65dd54be44890e28cec47ce5b82047a481b073f39a2e9cbb413af08" gracePeriod=2 Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.244789 4833 generic.go:334] "Generic (PLEG): container finished" podID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerID="fe07c4e7e65dd54be44890e28cec47ce5b82047a481b073f39a2e9cbb413af08" exitCode=0 Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.244866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpvtp" event={"ID":"02057902-11d0-4e91-9fb5-fc1a1df663a5","Type":"ContainerDied","Data":"fe07c4e7e65dd54be44890e28cec47ce5b82047a481b073f39a2e9cbb413af08"} Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.405259 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.506049 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-catalog-content\") pod \"02057902-11d0-4e91-9fb5-fc1a1df663a5\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.506146 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbh42\" (UniqueName: \"kubernetes.io/projected/02057902-11d0-4e91-9fb5-fc1a1df663a5-kube-api-access-dbh42\") pod \"02057902-11d0-4e91-9fb5-fc1a1df663a5\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.506216 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-utilities\") pod \"02057902-11d0-4e91-9fb5-fc1a1df663a5\" (UID: \"02057902-11d0-4e91-9fb5-fc1a1df663a5\") " Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.507228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-utilities" (OuterVolumeSpecName: "utilities") pod "02057902-11d0-4e91-9fb5-fc1a1df663a5" (UID: "02057902-11d0-4e91-9fb5-fc1a1df663a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.511016 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02057902-11d0-4e91-9fb5-fc1a1df663a5-kube-api-access-dbh42" (OuterVolumeSpecName: "kube-api-access-dbh42") pod "02057902-11d0-4e91-9fb5-fc1a1df663a5" (UID: "02057902-11d0-4e91-9fb5-fc1a1df663a5"). InnerVolumeSpecName "kube-api-access-dbh42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.552364 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02057902-11d0-4e91-9fb5-fc1a1df663a5" (UID: "02057902-11d0-4e91-9fb5-fc1a1df663a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.607496 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbh42\" (UniqueName: \"kubernetes.io/projected/02057902-11d0-4e91-9fb5-fc1a1df663a5-kube-api-access-dbh42\") on node \"crc\" DevicePath \"\"" Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.607525 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:49:29 crc kubenswrapper[4833]: I1013 07:49:29.607551 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02057902-11d0-4e91-9fb5-fc1a1df663a5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:49:30 crc kubenswrapper[4833]: I1013 07:49:30.256867 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpvtp" event={"ID":"02057902-11d0-4e91-9fb5-fc1a1df663a5","Type":"ContainerDied","Data":"96cf055e19c00d18d0617dbbd84446cb6fb9e568e29db48a7028b548af5c2966"} Oct 13 07:49:30 crc kubenswrapper[4833]: I1013 07:49:30.256918 4833 scope.go:117] "RemoveContainer" containerID="fe07c4e7e65dd54be44890e28cec47ce5b82047a481b073f39a2e9cbb413af08" Oct 13 07:49:30 crc kubenswrapper[4833]: I1013 07:49:30.256930 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpvtp" Oct 13 07:49:30 crc kubenswrapper[4833]: I1013 07:49:30.281745 4833 scope.go:117] "RemoveContainer" containerID="4416990d4fa2c915b9a125ae4254791ee4bb510d5c1e1dedafe84c4be43a7f60" Oct 13 07:49:30 crc kubenswrapper[4833]: I1013 07:49:30.299763 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tpvtp"] Oct 13 07:49:30 crc kubenswrapper[4833]: I1013 07:49:30.303382 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tpvtp"] Oct 13 07:49:30 crc kubenswrapper[4833]: I1013 07:49:30.317906 4833 scope.go:117] "RemoveContainer" containerID="1e83e5474bb27b1de32c05cb5c3021d0b18c887c708101cf4e21b8cc136edf5a" Oct 13 07:49:30 crc kubenswrapper[4833]: I1013 07:49:30.635730 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" path="/var/lib/kubelet/pods/02057902-11d0-4e91-9fb5-fc1a1df663a5/volumes" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.746718 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8wlr"] Oct 13 07:49:53 crc kubenswrapper[4833]: E1013 07:49:53.747446 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerName="extract-utilities" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.747461 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerName="extract-utilities" Oct 13 07:49:53 crc kubenswrapper[4833]: E1013 07:49:53.747498 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerName="registry-server" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.747508 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerName="registry-server" Oct 13 07:49:53 crc kubenswrapper[4833]: E1013 07:49:53.747522 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerName="extract-content" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.747533 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerName="extract-content" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.747742 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="02057902-11d0-4e91-9fb5-fc1a1df663a5" containerName="registry-server" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.748979 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.786861 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8wlr"] Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.914138 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-utilities\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.914319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2zp\" (UniqueName: \"kubernetes.io/projected/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-kube-api-access-gg2zp\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:53 crc kubenswrapper[4833]: I1013 07:49:53.914403 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-catalog-content\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:54 crc kubenswrapper[4833]: I1013 07:49:54.015474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-utilities\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:54 crc kubenswrapper[4833]: I1013 07:49:54.015589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2zp\" (UniqueName: \"kubernetes.io/projected/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-kube-api-access-gg2zp\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:54 crc kubenswrapper[4833]: I1013 07:49:54.015622 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-catalog-content\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:54 crc kubenswrapper[4833]: I1013 07:49:54.016077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-utilities\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:54 crc kubenswrapper[4833]: I1013 07:49:54.016121 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-catalog-content\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:54 crc kubenswrapper[4833]: I1013 07:49:54.034558 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gg2zp\" (UniqueName: \"kubernetes.io/projected/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-kube-api-access-gg2zp\") pod \"redhat-operators-s8wlr\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:54 crc kubenswrapper[4833]: I1013 07:49:54.073239 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:49:54 crc kubenswrapper[4833]: I1013 07:49:54.514676 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8wlr"] Oct 13 07:49:55 crc kubenswrapper[4833]: I1013 07:49:55.529157 4833 generic.go:334] "Generic (PLEG): container finished" podID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerID="18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1" exitCode=0 Oct 13 07:49:55 crc kubenswrapper[4833]: I1013 07:49:55.529256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8wlr" event={"ID":"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd","Type":"ContainerDied","Data":"18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1"} Oct 13 07:49:55 crc kubenswrapper[4833]: I1013 07:49:55.529556 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8wlr" event={"ID":"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd","Type":"ContainerStarted","Data":"98183452b7aa38cfe163bb609fb31c2a89bfdc9e78b1180b322f5ca11a5759ff"} Oct 13 07:49:55 crc kubenswrapper[4833]: I1013 07:49:55.532291 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:49:56 crc kubenswrapper[4833]: I1013 07:49:56.550501 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8wlr" event={"ID":"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd","Type":"ContainerStarted","Data":"27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8"} Oct 13 07:49:57 crc kubenswrapper[4833]: I1013 07:49:57.564585 4833 generic.go:334] "Generic (PLEG): container finished" podID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerID="27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8" exitCode=0 Oct 13 07:49:57 crc kubenswrapper[4833]: I1013 07:49:57.564649 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8wlr" event={"ID":"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd","Type":"ContainerDied","Data":"27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8"} Oct 13 07:49:58 crc kubenswrapper[4833]: I1013 07:49:58.575309 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8wlr" event={"ID":"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd","Type":"ContainerStarted","Data":"7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1"} Oct 13 07:49:58 crc kubenswrapper[4833]: I1013 07:49:58.605387 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8wlr" podStartSLOduration=3.10399617 podStartE2EDuration="5.60536908s" podCreationTimestamp="2025-10-13 07:49:53 +0000 UTC" firstStartedPulling="2025-10-13 07:49:55.531769894 +0000 UTC m=+4885.632192840" lastFinishedPulling="2025-10-13 07:49:58.033142834 +0000 UTC m=+4888.133565750" observedRunningTime="2025-10-13 07:49:58.600878702 +0000 UTC m=+4888.701301638" watchObservedRunningTime="2025-10-13 07:49:58.60536908 +0000 UTC m=+4888.705791986" Oct 13 07:50:00 crc 
kubenswrapper[4833]: I1013 07:50:00.543008 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:50:00 crc kubenswrapper[4833]: I1013 07:50:00.543468 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:50:04 crc kubenswrapper[4833]: I1013 07:50:04.074154 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:50:04 crc kubenswrapper[4833]: I1013 07:50:04.074468 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:50:04 crc kubenswrapper[4833]: I1013 07:50:04.127893 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:50:04 crc kubenswrapper[4833]: I1013 07:50:04.688537 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:50:04 crc kubenswrapper[4833]: I1013 07:50:04.747652 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8wlr"] Oct 13 07:50:06 crc kubenswrapper[4833]: I1013 07:50:06.643026 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s8wlr" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerName="registry-server" containerID="cri-o://7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1" gracePeriod=2 Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.436305 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.626245 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg2zp\" (UniqueName: \"kubernetes.io/projected/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-kube-api-access-gg2zp\") pod \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.626520 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-utilities\") pod \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.628042 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-utilities" (OuterVolumeSpecName: "utilities") pod "2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" (UID: "2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.628715 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-catalog-content\") pod \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\" (UID: \"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd\") " Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.629961 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.639916 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-kube-api-access-gg2zp" (OuterVolumeSpecName: "kube-api-access-gg2zp") pod "2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" (UID: "2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd"). InnerVolumeSpecName "kube-api-access-gg2zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.666668 4833 generic.go:334] "Generic (PLEG): container finished" podID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerID="7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1" exitCode=0 Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.667127 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8wlr" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.673824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8wlr" event={"ID":"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd","Type":"ContainerDied","Data":"7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1"} Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.673865 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8wlr" event={"ID":"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd","Type":"ContainerDied","Data":"98183452b7aa38cfe163bb609fb31c2a89bfdc9e78b1180b322f5ca11a5759ff"} Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.673888 4833 scope.go:117] "RemoveContainer" containerID="7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.703837 4833 scope.go:117] "RemoveContainer" containerID="27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.730388 4833 scope.go:117] "RemoveContainer" containerID="18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.732197 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg2zp\" (UniqueName: \"kubernetes.io/projected/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-kube-api-access-gg2zp\") on node \"crc\" DevicePath \"\"" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.758468 4833 scope.go:117] "RemoveContainer" containerID="7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1" Oct 13 07:50:08 crc kubenswrapper[4833]: E1013 07:50:08.759228 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1\": container with ID starting with 
7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1 not found: ID does not exist" containerID="7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.759279 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1"} err="failed to get container status \"7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1\": rpc error: code = NotFound desc = could not find container \"7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1\": container with ID starting with 7a5de7b6a07ffb1c32556e8c7cd0e5d696b96ad6ecacdb163605ca682443add1 not found: ID does not exist" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.759348 4833 scope.go:117] "RemoveContainer" containerID="27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8" Oct 13 07:50:08 crc kubenswrapper[4833]: E1013 07:50:08.759773 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8\": container with ID starting with 27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8 not found: ID does not exist" containerID="27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.759803 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8"} err="failed to get container status \"27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8\": rpc error: code = NotFound desc = could not find container \"27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8\": container with ID starting with 27c853c8fcdc90696da3cecc578a89a048e0de5437110d3fe161501b7ceea1f8 not found: ID does not exist" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.759821 4833 scope.go:117] "RemoveContainer" containerID="18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1" Oct 13 07:50:08 crc kubenswrapper[4833]: E1013 07:50:08.760071 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1\": container with ID starting with 18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1 not found: ID does not exist" containerID="18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.760099 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1"} err="failed to get container status \"18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1\": rpc error: code = NotFound desc = could not find container \"18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1\": container with ID starting with 18e8a81c23a74998971d635e494016ff907d6f7fc913e5aafe03fe52c9bd94e1 not found: ID does not exist" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.763805 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" (UID: 
"2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:50:08 crc kubenswrapper[4833]: I1013 07:50:08.833196 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:50:09 crc kubenswrapper[4833]: I1013 07:50:09.020131 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8wlr"] Oct 13 07:50:09 crc kubenswrapper[4833]: I1013 07:50:09.031298 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s8wlr"] Oct 13 07:50:10 crc kubenswrapper[4833]: I1013 07:50:10.637529 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" path="/var/lib/kubelet/pods/2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd/volumes" Oct 13 07:50:30 crc kubenswrapper[4833]: I1013 07:50:30.542670 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:50:30 crc kubenswrapper[4833]: I1013 07:50:30.543359 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.840523 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-678578b8df-x42kl"] Oct 13 07:50:49 crc kubenswrapper[4833]: E1013 07:50:49.842287 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerName="extract-utilities" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.842458 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerName="extract-utilities" Oct 13 07:50:49 crc kubenswrapper[4833]: E1013 07:50:49.842548 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerName="extract-content" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.842618 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerName="extract-content" Oct 13 07:50:49 crc kubenswrapper[4833]: E1013 07:50:49.842683 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerName="registry-server" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.842743 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerName="registry-server" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.842949 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2615ceb1-1c07-4aca-9b9b-bdc6aaa5a3dd" containerName="registry-server" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.843766 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.846023 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.846239 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.846734 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zb89n" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.855810 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-ktg9g"] Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.856930 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.867015 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.869485 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-x42kl"] Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.873808 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-ktg9g"] Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.881268 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.996888 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.997173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dckpp\" (UniqueName: \"kubernetes.io/projected/d89941be-76db-4f4a-bfdb-66803967a5bc-kube-api-access-dckpp\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.997273 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2689e83-2a9c-4e72-bf50-d9b7eef75825-config\") pod \"dnsmasq-dns-678578b8df-x42kl\" (UID: \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\") " pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.997380 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjf6\" (UniqueName: \"kubernetes.io/projected/f2689e83-2a9c-4e72-bf50-d9b7eef75825-kube-api-access-gtjf6\") pod \"dnsmasq-dns-678578b8df-x42kl\" (UID: \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\") " pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:49 crc kubenswrapper[4833]: I1013 07:50:49.997468 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " 
pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.098919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.098976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dckpp\" (UniqueName: \"kubernetes.io/projected/d89941be-76db-4f4a-bfdb-66803967a5bc-kube-api-access-dckpp\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.099016 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2689e83-2a9c-4e72-bf50-d9b7eef75825-config\") pod \"dnsmasq-dns-678578b8df-x42kl\" (UID: \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\") " pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.099054 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtjf6\" (UniqueName: \"kubernetes.io/projected/f2689e83-2a9c-4e72-bf50-d9b7eef75825-kube-api-access-gtjf6\") pod \"dnsmasq-dns-678578b8df-x42kl\" (UID: \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\") " pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.099075 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.100088 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.100842 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.101356 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2689e83-2a9c-4e72-bf50-d9b7eef75825-config\") pod \"dnsmasq-dns-678578b8df-x42kl\" (UID: \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\") " pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.130690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dckpp\" (UniqueName: \"kubernetes.io/projected/d89941be-76db-4f4a-bfdb-66803967a5bc-kube-api-access-dckpp\") pod \"dnsmasq-dns-6b8f87f5c5-ktg9g\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.131065 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtjf6\" (UniqueName: \"kubernetes.io/projected/f2689e83-2a9c-4e72-bf50-d9b7eef75825-kube-api-access-gtjf6\") pod \"dnsmasq-dns-678578b8df-x42kl\" (UID: \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\") " pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.171420 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.184113 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.194861 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-x42kl"] Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.242331 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b7964457-9cq2f"] Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.243926 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.297557 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-9cq2f"] Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.404858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-config\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.405225 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-dns-svc\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.405255 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vwsd\" (UniqueName: \"kubernetes.io/projected/41cb6a99-a382-47fd-9307-22849d9ff54b-kube-api-access-4vwsd\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.506555 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-config\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.506624 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-dns-svc\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.506644 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vwsd\" (UniqueName: 
\"kubernetes.io/projected/41cb6a99-a382-47fd-9307-22849d9ff54b-kube-api-access-4vwsd\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.507988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-config\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.508782 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-dns-svc\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.531499 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vwsd\" (UniqueName: \"kubernetes.io/projected/41cb6a99-a382-47fd-9307-22849d9ff54b-kube-api-access-4vwsd\") pod \"dnsmasq-dns-8b7964457-9cq2f\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") " pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.538450 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-ktg9g"] Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.562816 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-qsw8m"] Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.564086 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.576436 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-qsw8m"] Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.607032 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.709419 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-config\") pod \"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.709487 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2z9\" (UniqueName: \"kubernetes.io/projected/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-kube-api-access-mh2z9\") pod \"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.709642 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-dns-svc\") pod \"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.751389 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-x42kl"] Oct 13 07:50:50 crc kubenswrapper[4833]: W1013 07:50:50.758834 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2689e83_2a9c_4e72_bf50_d9b7eef75825.slice/crio-5854c9549a04f23b3fa3e0ff0c92e5eb2310c6e0495269e8d6ed9d7171cfa3f8 WatchSource:0}: Error finding container 5854c9549a04f23b3fa3e0ff0c92e5eb2310c6e0495269e8d6ed9d7171cfa3f8: Status 404 returned error can't find the container with id 5854c9549a04f23b3fa3e0ff0c92e5eb2310c6e0495269e8d6ed9d7171cfa3f8 Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.812103 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-ktg9g"] Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.812158 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-dns-svc\") pod \"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.812561 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-config\") pod \"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.812624 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2z9\" (UniqueName: \"kubernetes.io/projected/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-kube-api-access-mh2z9\") pod \"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.813046 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-dns-svc\") pod 
\"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.814060 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-config\") pod \"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.827522 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2z9\" (UniqueName: \"kubernetes.io/projected/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-kube-api-access-mh2z9\") pod \"dnsmasq-dns-67d9f7fb89-qsw8m\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") " pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:50 crc kubenswrapper[4833]: I1013 07:50:50.886946 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.028987 4833 generic.go:334] "Generic (PLEG): container finished" podID="f2689e83-2a9c-4e72-bf50-d9b7eef75825" containerID="0347887718c807a4a9659ff89cb064ac92fc4f771141124857c4aafb84030ac8" exitCode=0 Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.029035 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-x42kl" event={"ID":"f2689e83-2a9c-4e72-bf50-d9b7eef75825","Type":"ContainerDied","Data":"0347887718c807a4a9659ff89cb064ac92fc4f771141124857c4aafb84030ac8"} Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.029322 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-x42kl" event={"ID":"f2689e83-2a9c-4e72-bf50-d9b7eef75825","Type":"ContainerStarted","Data":"5854c9549a04f23b3fa3e0ff0c92e5eb2310c6e0495269e8d6ed9d7171cfa3f8"} Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.031399 4833 generic.go:334] "Generic (PLEG): container finished" podID="d89941be-76db-4f4a-bfdb-66803967a5bc" containerID="56917bdf1fdb28a757ec0e08a40a44268a093e4bce3d2aee78cbffe0ee163898" exitCode=0 Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.031443 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" event={"ID":"d89941be-76db-4f4a-bfdb-66803967a5bc","Type":"ContainerDied","Data":"56917bdf1fdb28a757ec0e08a40a44268a093e4bce3d2aee78cbffe0ee163898"} Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.031474 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" event={"ID":"d89941be-76db-4f4a-bfdb-66803967a5bc","Type":"ContainerStarted","Data":"9c4499b96235b51c0aceeb9724348fd1029c43d69c5e435158a8264bd496fa57"} Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.081961 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-9cq2f"] Oct 13 07:50:51 crc kubenswrapper[4833]: W1013 07:50:51.096223 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41cb6a99_a382_47fd_9307_22849d9ff54b.slice/crio-c747fafb33be4b1846298b232e8c6cdc750831e222e62f71be0bfbe7ca9fc31a WatchSource:0}: Error finding container c747fafb33be4b1846298b232e8c6cdc750831e222e62f71be0bfbe7ca9fc31a: Status 404 returned error can't find the container with id c747fafb33be4b1846298b232e8c6cdc750831e222e62f71be0bfbe7ca9fc31a Oct 13 
07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.410920 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.412369 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.417933 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.418128 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.418405 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.418561 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.418676 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.418807 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.418910 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5t9lx" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.443595 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528323 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528387 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528414 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528434 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528490 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mfpm\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-kube-api-access-2mfpm\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528520 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528610 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cce9a391-72e2-421f-9311-a1afea3c6ee0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528726 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528832 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cce9a391-72e2-421f-9311-a1afea3c6ee0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.528912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.629752 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.629796 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.629816 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.629837 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.629872 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.629897 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mfpm\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-kube-api-access-2mfpm\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.629953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.629982 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cce9a391-72e2-421f-9311-a1afea3c6ee0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.630019 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.630068 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cce9a391-72e2-421f-9311-a1afea3c6ee0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.630112 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.631878 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.632328 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.633021 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.633236 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.633255 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.634252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.635638 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.635672 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/683dbe71177f427d5b7a565097b523476a3c38613846158d82f673131ad51b80/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.635751 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.640162 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cce9a391-72e2-421f-9311-a1afea3c6ee0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.640530 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cce9a391-72e2-421f-9311-a1afea3c6ee0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.649404 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mfpm\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-kube-api-access-2mfpm\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.677437 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.679928 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.688019 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.688086 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.688031 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.688306 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.688571 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.688843 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8rjnn"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.689019 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.697673 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.739076 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"rabbitmq-cell1-server-0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.798331 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.831984 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-config-data\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832208 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832263 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832398 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832457 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9m9c\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-kube-api-access-h9m9c\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.832575 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.933880 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934248 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-config-data\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934288 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934307 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934332 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934352 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934372 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934387 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934402 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934422 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9m9c\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-kube-api-access-h9m9c\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.934460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.937381 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.937746 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.938333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-config-data\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.938608 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.939962 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.943347 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.943463 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.943490 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2f1934ad412792be8e2bf06fa14d02296d4168d544547d24afcc4d0ada46464f/globalmount\"" pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.944493 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.949372 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-x42kl"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.951711 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.954991 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.960826 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9m9c\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-kube-api-access-h9m9c\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.972435 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g"
Oct 13 07:50:51 crc kubenswrapper[4833]: I1013 07:50:51.984196 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"rabbitmq-server-0\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") " pod="openstack/rabbitmq-server-0"
Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.036012 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2689e83-2a9c-4e72-bf50-d9b7eef75825-config\") pod \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\" (UID: \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\") "
Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.036208 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtjf6\" (UniqueName: \"kubernetes.io/projected/f2689e83-2a9c-4e72-bf50-d9b7eef75825-kube-api-access-gtjf6\") pod \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\" (UID: \"f2689e83-2a9c-4e72-bf50-d9b7eef75825\") "
Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.041932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2689e83-2a9c-4e72-bf50-d9b7eef75825-kube-api-access-gtjf6" (OuterVolumeSpecName: "kube-api-access-gtjf6") pod "f2689e83-2a9c-4e72-bf50-d9b7eef75825" (UID: "f2689e83-2a9c-4e72-bf50-d9b7eef75825"). InnerVolumeSpecName "kube-api-access-gtjf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.047482 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-x42kl" event={"ID":"f2689e83-2a9c-4e72-bf50-d9b7eef75825","Type":"ContainerDied","Data":"5854c9549a04f23b3fa3e0ff0c92e5eb2310c6e0495269e8d6ed9d7171cfa3f8"}
Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.047554 4833 scope.go:117] "RemoveContainer" containerID="0347887718c807a4a9659ff89cb064ac92fc4f771141124857c4aafb84030ac8"
Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.048109 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-x42kl" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.049122 4833 generic.go:334] "Generic (PLEG): container finished" podID="41cb6a99-a382-47fd-9307-22849d9ff54b" containerID="75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd" exitCode=0 Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.049201 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" event={"ID":"41cb6a99-a382-47fd-9307-22849d9ff54b","Type":"ContainerDied","Data":"75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd"} Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.049260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" event={"ID":"41cb6a99-a382-47fd-9307-22849d9ff54b","Type":"ContainerStarted","Data":"c747fafb33be4b1846298b232e8c6cdc750831e222e62f71be0bfbe7ca9fc31a"} Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.051067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" event={"ID":"d89941be-76db-4f4a-bfdb-66803967a5bc","Type":"ContainerDied","Data":"9c4499b96235b51c0aceeb9724348fd1029c43d69c5e435158a8264bd496fa57"} Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.051352 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-ktg9g" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.052782 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2689e83-2a9c-4e72-bf50-d9b7eef75825-config" (OuterVolumeSpecName: "config") pod "f2689e83-2a9c-4e72-bf50-d9b7eef75825" (UID: "f2689e83-2a9c-4e72-bf50-d9b7eef75825"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.059420 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.088681 4833 scope.go:117] "RemoveContainer" containerID="56917bdf1fdb28a757ec0e08a40a44268a093e4bce3d2aee78cbffe0ee163898" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.116980 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-qsw8m"] Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.138764 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config\") pod \"d89941be-76db-4f4a-bfdb-66803967a5bc\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.138894 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dckpp\" (UniqueName: \"kubernetes.io/projected/d89941be-76db-4f4a-bfdb-66803967a5bc-kube-api-access-dckpp\") pod \"d89941be-76db-4f4a-bfdb-66803967a5bc\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.138941 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-dns-svc\") pod \"d89941be-76db-4f4a-bfdb-66803967a5bc\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.139429 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2689e83-2a9c-4e72-bf50-d9b7eef75825-config\") on node \"crc\" DevicePath \"\"" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.139447 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtjf6\" (UniqueName: \"kubernetes.io/projected/f2689e83-2a9c-4e72-bf50-d9b7eef75825-kube-api-access-gtjf6\") on node \"crc\" DevicePath \"\"" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.149897 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89941be-76db-4f4a-bfdb-66803967a5bc-kube-api-access-dckpp" (OuterVolumeSpecName: "kube-api-access-dckpp") pod "d89941be-76db-4f4a-bfdb-66803967a5bc" (UID: "d89941be-76db-4f4a-bfdb-66803967a5bc"). InnerVolumeSpecName "kube-api-access-dckpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:50:52 crc kubenswrapper[4833]: E1013 07:50:52.166035 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config podName:d89941be-76db-4f4a-bfdb-66803967a5bc nodeName:}" failed. No retries permitted until 2025-10-13 07:50:52.665990104 +0000 UTC m=+4942.766413020 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config") pod "d89941be-76db-4f4a-bfdb-66803967a5bc" (UID: "d89941be-76db-4f4a-bfdb-66803967a5bc") : error deleting /var/lib/kubelet/pods/d89941be-76db-4f4a-bfdb-66803967a5bc/volume-subpaths: remove /var/lib/kubelet/pods/d89941be-76db-4f4a-bfdb-66803967a5bc/volume-subpaths: no such file or directory Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.171029 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d89941be-76db-4f4a-bfdb-66803967a5bc" (UID: "d89941be-76db-4f4a-bfdb-66803967a5bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.240799 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dckpp\" (UniqueName: \"kubernetes.io/projected/d89941be-76db-4f4a-bfdb-66803967a5bc-kube-api-access-dckpp\") on node \"crc\" DevicePath \"\"" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.241057 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.268926 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.412088 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-x42kl"] Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.416570 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-x42kl"] Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.530491 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 07:50:52 crc kubenswrapper[4833]: W1013 07:50:52.536107 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a6a9ada_bed0_4151_9ec5_cb318a4cb43c.slice/crio-da4e578f1a266e6391c8f4b8153e8b76dc3b7f9218ce3a38900f28b36669b794 WatchSource:0}: Error finding container da4e578f1a266e6391c8f4b8153e8b76dc3b7f9218ce3a38900f28b36669b794: Status 404 returned error can't find the container with id da4e578f1a266e6391c8f4b8153e8b76dc3b7f9218ce3a38900f28b36669b794 Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.635442 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2689e83-2a9c-4e72-bf50-d9b7eef75825" path="/var/lib/kubelet/pods/f2689e83-2a9c-4e72-bf50-d9b7eef75825/volumes" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.682601 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 13 07:50:52 crc kubenswrapper[4833]: E1013 07:50:52.683127 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2689e83-2a9c-4e72-bf50-d9b7eef75825" containerName="init" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.683196 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2689e83-2a9c-4e72-bf50-d9b7eef75825" containerName="init" Oct 13 07:50:52 crc kubenswrapper[4833]: E1013 07:50:52.683252 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89941be-76db-4f4a-bfdb-66803967a5bc" containerName="init" Oct 13 07:50:52 
crc kubenswrapper[4833]: I1013 07:50:52.683297 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89941be-76db-4f4a-bfdb-66803967a5bc" containerName="init" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.683520 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2689e83-2a9c-4e72-bf50-d9b7eef75825" containerName="init" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.683614 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89941be-76db-4f4a-bfdb-66803967a5bc" containerName="init" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.684438 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.693061 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.693291 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.693695 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.694607 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.694632 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-v476d" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.697210 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.700150 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.747501 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config\") pod \"d89941be-76db-4f4a-bfdb-66803967a5bc\" (UID: \"d89941be-76db-4f4a-bfdb-66803967a5bc\") " Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.748238 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config" (OuterVolumeSpecName: "config") pod "d89941be-76db-4f4a-bfdb-66803967a5bc" (UID: "d89941be-76db-4f4a-bfdb-66803967a5bc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.848882 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-config-data-default\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9685cf2f-0f53-49dd-8808-9742da6c64b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9685cf2f-0f53-49dd-8808-9742da6c64b9\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849258 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849347 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmfj5\" (UniqueName: \"kubernetes.io/projected/f5728b9b-3142-4f3c-af80-b23b846b22e0-kube-api-access-qmfj5\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849428 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5728b9b-3142-4f3c-af80-b23b846b22e0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849488 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849579 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-kolla-config\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849659 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-secrets\") pod \"openstack-galera-0\" (UID: 
\"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.849897 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89941be-76db-4f4a-bfdb-66803967a5bc-config\") on node \"crc\" DevicePath \"\"" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951304 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9685cf2f-0f53-49dd-8808-9742da6c64b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9685cf2f-0f53-49dd-8808-9742da6c64b9\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951383 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951424 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmfj5\" (UniqueName: \"kubernetes.io/projected/f5728b9b-3142-4f3c-af80-b23b846b22e0-kube-api-access-qmfj5\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5728b9b-3142-4f3c-af80-b23b846b22e0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951490 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-kolla-config\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951579 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-secrets\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.951637 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-config-data-default\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.952493 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-config-data-default\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.952958 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5728b9b-3142-4f3c-af80-b23b846b22e0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.953724 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-kolla-config\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.954785 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5728b9b-3142-4f3c-af80-b23b846b22e0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.960936 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.960978 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9685cf2f-0f53-49dd-8808-9742da6c64b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9685cf2f-0f53-49dd-8808-9742da6c64b9\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5aa20d83cb357c0ffb1ecf66a1105a6a85eb08179c1b0e9f1d0fcb3b1bc3ebb2/globalmount\"" pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.981315 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-secrets\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.981393 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.984410 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5728b9b-3142-4f3c-af80-b23b846b22e0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:52 crc kubenswrapper[4833]: I1013 07:50:52.988012 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmfj5\" (UniqueName: \"kubernetes.io/projected/f5728b9b-3142-4f3c-af80-b23b846b22e0-kube-api-access-qmfj5\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.030177 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-ktg9g"] Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.040231 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-ktg9g"] Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.063504 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c","Type":"ContainerStarted","Data":"da4e578f1a266e6391c8f4b8153e8b76dc3b7f9218ce3a38900f28b36669b794"} Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.065860 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cce9a391-72e2-421f-9311-a1afea3c6ee0","Type":"ContainerStarted","Data":"0ce10fabcf7b9fd034a5b8b3a45fc9ac4859a3e88bf9b70c3c32c6405d87b745"} Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.078295 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" event={"ID":"41cb6a99-a382-47fd-9307-22849d9ff54b","Type":"ContainerStarted","Data":"be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2"} Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.078434 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.089207 4833 generic.go:334] "Generic 
(PLEG): container finished" podID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" containerID="7f34ea29ca0461ab2ae91b0b114ecbf673db48cd73671a808cd6e9530d1176af" exitCode=0 Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.089284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" event={"ID":"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed","Type":"ContainerDied","Data":"7f34ea29ca0461ab2ae91b0b114ecbf673db48cd73671a808cd6e9530d1176af"} Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.089317 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" event={"ID":"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed","Type":"ContainerStarted","Data":"a08d211570d418875fb4f4fce4561002c2bf1273c3a67107431b67f527f1b5e0"} Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.108694 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" podStartSLOduration=3.108667681 podStartE2EDuration="3.108667681s" podCreationTimestamp="2025-10-13 07:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:50:53.105499441 +0000 UTC m=+4943.205922367" watchObservedRunningTime="2025-10-13 07:50:53.108667681 +0000 UTC m=+4943.209090597" Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.413791 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9685cf2f-0f53-49dd-8808-9742da6c64b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9685cf2f-0f53-49dd-8808-9742da6c64b9\") pod \"openstack-galera-0\" (UID: \"f5728b9b-3142-4f3c-af80-b23b846b22e0\") " pod="openstack/openstack-galera-0" Oct 13 07:50:53 crc kubenswrapper[4833]: I1013 07:50:53.634220 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.053067 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 13 07:50:54 crc kubenswrapper[4833]: W1013 07:50:54.066777 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5728b9b_3142_4f3c_af80_b23b846b22e0.slice/crio-e6e533e0c0bcb117512f42313142f9b17f33610ba8696576cde272376297582c WatchSource:0}: Error finding container e6e533e0c0bcb117512f42313142f9b17f33610ba8696576cde272376297582c: Status 404 returned error can't find the container with id e6e533e0c0bcb117512f42313142f9b17f33610ba8696576cde272376297582c Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.100291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f5728b9b-3142-4f3c-af80-b23b846b22e0","Type":"ContainerStarted","Data":"e6e533e0c0bcb117512f42313142f9b17f33610ba8696576cde272376297582c"} Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.104259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" event={"ID":"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed","Type":"ContainerStarted","Data":"87390d06b661efeb111463db357756ad593225d01d13a01108c48c8ccf60a44f"} Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.104329 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.105940 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.107722 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.111769 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c","Type":"ContainerStarted","Data":"405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38"} Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.111851 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.112060 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wvd7g" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.113069 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.113339 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.116196 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cce9a391-72e2-421f-9311-a1afea3c6ee0","Type":"ContainerStarted","Data":"ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3"} Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.135175 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.143343 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" podStartSLOduration=4.143284556 podStartE2EDuration="4.143284556s" podCreationTimestamp="2025-10-13 07:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:50:54.137019578 +0000 UTC m=+4944.237442504" watchObservedRunningTime="2025-10-13 07:50:54.143284556 +0000 UTC m=+4944.243707612" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.275786 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.275860 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.275922 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8c16db4-1426-49c0-9455-5a7a77cda12c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c16db4-1426-49c0-9455-5a7a77cda12c\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.275976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.276061 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.276279 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.276325 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e193d26-9513-4f0d-bed6-e499f9264ba6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.276363 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2ht\" (UniqueName: \"kubernetes.io/projected/8e193d26-9513-4f0d-bed6-e499f9264ba6-kube-api-access-5m2ht\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.276458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.377399 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.377672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e193d26-9513-4f0d-bed6-e499f9264ba6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.377702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2ht\" (UniqueName: \"kubernetes.io/projected/8e193d26-9513-4f0d-bed6-e499f9264ba6-kube-api-access-5m2ht\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.377981 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/8e193d26-9513-4f0d-bed6-e499f9264ba6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.378062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.378113 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.378138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.378409 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8c16db4-1426-49c0-9455-5a7a77cda12c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c16db4-1426-49c0-9455-5a7a77cda12c\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.378441 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.378463 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.378943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.379192 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.379360 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8e193d26-9513-4f0d-bed6-e499f9264ba6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.381768 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.382115 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.387225 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e193d26-9513-4f0d-bed6-e499f9264ba6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.388523 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.388589 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8c16db4-1426-49c0-9455-5a7a77cda12c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c16db4-1426-49c0-9455-5a7a77cda12c\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f0aaf67efd0a3edfd77cd3259a5315ac90119a97d3c5a0b85f91369a8360c704/globalmount\"" pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.396040 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2ht\" (UniqueName: \"kubernetes.io/projected/8e193d26-9513-4f0d-bed6-e499f9264ba6-kube-api-access-5m2ht\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.422557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8c16db4-1426-49c0-9455-5a7a77cda12c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8c16db4-1426-49c0-9455-5a7a77cda12c\") pod \"openstack-cell1-galera-0\" (UID: \"8e193d26-9513-4f0d-bed6-e499f9264ba6\") " pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.473176 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.474400 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.478031 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nxs6d" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.478086 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.478207 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.484129 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.485601 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.584440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsxw\" (UniqueName: \"kubernetes.io/projected/53377dda-bb5d-4ac3-bf05-d6d7a8801896-kube-api-access-hxsxw\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.584524 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/53377dda-bb5d-4ac3-bf05-d6d7a8801896-memcached-tls-certs\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.584586 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53377dda-bb5d-4ac3-bf05-d6d7a8801896-kolla-config\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.584605 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53377dda-bb5d-4ac3-bf05-d6d7a8801896-config-data\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.584624 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53377dda-bb5d-4ac3-bf05-d6d7a8801896-combined-ca-bundle\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.636822 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89941be-76db-4f4a-bfdb-66803967a5bc" path="/var/lib/kubelet/pods/d89941be-76db-4f4a-bfdb-66803967a5bc/volumes" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.686414 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsxw\" (UniqueName: \"kubernetes.io/projected/53377dda-bb5d-4ac3-bf05-d6d7a8801896-kube-api-access-hxsxw\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.686505 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/53377dda-bb5d-4ac3-bf05-d6d7a8801896-memcached-tls-certs\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.686621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53377dda-bb5d-4ac3-bf05-d6d7a8801896-kolla-config\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.686648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53377dda-bb5d-4ac3-bf05-d6d7a8801896-config-data\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.686672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53377dda-bb5d-4ac3-bf05-d6d7a8801896-combined-ca-bundle\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.688339 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53377dda-bb5d-4ac3-bf05-d6d7a8801896-kolla-config\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.688420 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53377dda-bb5d-4ac3-bf05-d6d7a8801896-config-data\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.692100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/53377dda-bb5d-4ac3-bf05-d6d7a8801896-memcached-tls-certs\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.692723 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53377dda-bb5d-4ac3-bf05-d6d7a8801896-combined-ca-bundle\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.703988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsxw\" (UniqueName: \"kubernetes.io/projected/53377dda-bb5d-4ac3-bf05-d6d7a8801896-kube-api-access-hxsxw\") pod \"memcached-0\" (UID: \"53377dda-bb5d-4ac3-bf05-d6d7a8801896\") " pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.791485 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 13 07:50:54 crc kubenswrapper[4833]: I1013 07:50:54.937222 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 07:50:54 crc kubenswrapper[4833]: W1013 07:50:54.944120 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e193d26_9513_4f0d_bed6_e499f9264ba6.slice/crio-789d34ed2bef1f2927d581f9696f85187110778ceb7a3cb0d9fcf5f816fe6368 WatchSource:0}: Error finding container 789d34ed2bef1f2927d581f9696f85187110778ceb7a3cb0d9fcf5f816fe6368: Status 404 returned error can't find the container with id 789d34ed2bef1f2927d581f9696f85187110778ceb7a3cb0d9fcf5f816fe6368 Oct 13 07:50:55 crc kubenswrapper[4833]: I1013 07:50:55.123722 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f5728b9b-3142-4f3c-af80-b23b846b22e0","Type":"ContainerStarted","Data":"a928ece1ff13b3247e93dcee99bae91c77517bab6118251cdecd79a1ec86bac5"} Oct 13 07:50:55 crc kubenswrapper[4833]: I1013 07:50:55.125636 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8e193d26-9513-4f0d-bed6-e499f9264ba6","Type":"ContainerStarted","Data":"dc2b2c9b525c5e32d8fa938d839892a3d44d259e6bf3917c931e1d608a520189"} Oct 13 07:50:55 crc kubenswrapper[4833]: I1013 07:50:55.125701 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8e193d26-9513-4f0d-bed6-e499f9264ba6","Type":"ContainerStarted","Data":"789d34ed2bef1f2927d581f9696f85187110778ceb7a3cb0d9fcf5f816fe6368"} Oct 13 07:50:55 crc kubenswrapper[4833]: I1013 07:50:55.229197 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 13 07:50:56 crc kubenswrapper[4833]: I1013 07:50:56.134984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"53377dda-bb5d-4ac3-bf05-d6d7a8801896","Type":"ContainerStarted","Data":"552414ecf9c6448d62aafd3ad9dc96566d53edff2ba5db004da10b396f7edd6c"} Oct 13 07:50:56 crc kubenswrapper[4833]: I1013 07:50:56.135401 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"53377dda-bb5d-4ac3-bf05-d6d7a8801896","Type":"ContainerStarted","Data":"ed5faaeabb1fafa7d845f5e48317dfc54c00aad52d272cbcfb8a4e7b74424ce6"} Oct 13 07:50:56 crc kubenswrapper[4833]: I1013 07:50:56.166337 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.1663098339999998 podStartE2EDuration="2.166309834s" podCreationTimestamp="2025-10-13 07:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:50:56.15839568 +0000 UTC m=+4946.258818616" watchObservedRunningTime="2025-10-13 07:50:56.166309834 +0000 UTC m=+4946.266732760" Oct 13 07:50:57 crc kubenswrapper[4833]: I1013 07:50:57.143020 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 13 07:50:58 crc kubenswrapper[4833]: I1013 07:50:58.151525 4833 generic.go:334] "Generic (PLEG): container finished" podID="f5728b9b-3142-4f3c-af80-b23b846b22e0" containerID="a928ece1ff13b3247e93dcee99bae91c77517bab6118251cdecd79a1ec86bac5" exitCode=0 Oct 13 07:50:58 crc kubenswrapper[4833]: I1013 07:50:58.151612 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"f5728b9b-3142-4f3c-af80-b23b846b22e0","Type":"ContainerDied","Data":"a928ece1ff13b3247e93dcee99bae91c77517bab6118251cdecd79a1ec86bac5"} Oct 13 07:50:59 crc kubenswrapper[4833]: I1013 07:50:59.162703 4833 generic.go:334] "Generic (PLEG): container finished" podID="8e193d26-9513-4f0d-bed6-e499f9264ba6" containerID="dc2b2c9b525c5e32d8fa938d839892a3d44d259e6bf3917c931e1d608a520189" exitCode=0 Oct 13 07:50:59 crc kubenswrapper[4833]: I1013 07:50:59.162797 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8e193d26-9513-4f0d-bed6-e499f9264ba6","Type":"ContainerDied","Data":"dc2b2c9b525c5e32d8fa938d839892a3d44d259e6bf3917c931e1d608a520189"} Oct 13 07:50:59 crc kubenswrapper[4833]: I1013 07:50:59.166631 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f5728b9b-3142-4f3c-af80-b23b846b22e0","Type":"ContainerStarted","Data":"9b64b5e4d16891079fb221629ef627ae1fb952057c8ecd50e428fab83de6cd54"} Oct 13 07:50:59 crc kubenswrapper[4833]: I1013 07:50:59.221296 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.221273361 podStartE2EDuration="8.221273361s" podCreationTimestamp="2025-10-13 07:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:50:59.217283118 +0000 UTC m=+4949.317706054" watchObservedRunningTime="2025-10-13 07:50:59.221273361 +0000 UTC m=+4949.321696287" Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.176530 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8e193d26-9513-4f0d-bed6-e499f9264ba6","Type":"ContainerStarted","Data":"ced589ffa2e456bd04e464b3bda6d16333c3a312c2b0d1dbb3117bf59c528bcc"} Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.199442 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.199422544 podStartE2EDuration="7.199422544s" podCreationTimestamp="2025-10-13 07:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:51:00.195109462 +0000 UTC m=+4950.295532378" watchObservedRunningTime="2025-10-13 07:51:00.199422544 +0000 UTC m=+4950.299845460" Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.542856 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.542929 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.542984 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.544005 4833 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.544079 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" gracePeriod=600 Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.608908 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" Oct 13 07:51:00 crc kubenswrapper[4833]: E1013 07:51:00.668681 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.889029 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" Oct 13 07:51:00 crc kubenswrapper[4833]: I1013 07:51:00.939217 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-9cq2f"] Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.185267 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" exitCode=0 Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.185346 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba"} Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.185407 4833 scope.go:117] "RemoveContainer" containerID="691976b18bac0545b5c830635fa73e1fa7474db2a1c6b6609dd3465486bed431" Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.185525 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" podUID="41cb6a99-a382-47fd-9307-22849d9ff54b" containerName="dnsmasq-dns" containerID="cri-o://be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2" gracePeriod=10 Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.186054 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:51:01 crc kubenswrapper[4833]: E1013 07:51:01.186311 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:51:01 crc 
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.796973 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-config\") pod \"41cb6a99-a382-47fd-9307-22849d9ff54b\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") "
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.797172 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vwsd\" (UniqueName: \"kubernetes.io/projected/41cb6a99-a382-47fd-9307-22849d9ff54b-kube-api-access-4vwsd\") pod \"41cb6a99-a382-47fd-9307-22849d9ff54b\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") "
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.797208 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-dns-svc\") pod \"41cb6a99-a382-47fd-9307-22849d9ff54b\" (UID: \"41cb6a99-a382-47fd-9307-22849d9ff54b\") "
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.803080 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41cb6a99-a382-47fd-9307-22849d9ff54b-kube-api-access-4vwsd" (OuterVolumeSpecName: "kube-api-access-4vwsd") pod "41cb6a99-a382-47fd-9307-22849d9ff54b" (UID: "41cb6a99-a382-47fd-9307-22849d9ff54b"). InnerVolumeSpecName "kube-api-access-4vwsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.838459 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-config" (OuterVolumeSpecName: "config") pod "41cb6a99-a382-47fd-9307-22849d9ff54b" (UID: "41cb6a99-a382-47fd-9307-22849d9ff54b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.859752 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41cb6a99-a382-47fd-9307-22849d9ff54b" (UID: "41cb6a99-a382-47fd-9307-22849d9ff54b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.899599 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vwsd\" (UniqueName: \"kubernetes.io/projected/41cb6a99-a382-47fd-9307-22849d9ff54b-kube-api-access-4vwsd\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.899669 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:01 crc kubenswrapper[4833]: I1013 07:51:01.899700 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb6a99-a382-47fd-9307-22849d9ff54b-config\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.195139 4833 generic.go:334] "Generic (PLEG): container finished" podID="41cb6a99-a382-47fd-9307-22849d9ff54b" containerID="be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2" exitCode=0
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.195223 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b7964457-9cq2f"
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.195248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" event={"ID":"41cb6a99-a382-47fd-9307-22849d9ff54b","Type":"ContainerDied","Data":"be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2"}
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.195616 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-9cq2f" event={"ID":"41cb6a99-a382-47fd-9307-22849d9ff54b","Type":"ContainerDied","Data":"c747fafb33be4b1846298b232e8c6cdc750831e222e62f71be0bfbe7ca9fc31a"}
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.195638 4833 scope.go:117] "RemoveContainer" containerID="be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2"
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.218389 4833 scope.go:117] "RemoveContainer" containerID="75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd"
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.232437 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-9cq2f"]
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.238203 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-9cq2f"]
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.250233 4833 scope.go:117] "RemoveContainer" containerID="be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2"
Oct 13 07:51:02 crc kubenswrapper[4833]: E1013 07:51:02.250767 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2\": container with ID starting with be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2 not found: ID does not exist" containerID="be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2"
Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.250844 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2"} err="failed to get container status \"be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2\": rpc error: code = NotFound desc = could not find container \"be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2\": container with ID starting with be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2 not found: ID does not exist"
\"be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2\": rpc error: code = NotFound desc = could not find container \"be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2\": container with ID starting with be0857147bcc48136d0e0f7b9713b3ddc01c0eaf287cf338c7c6d62abb0651e2 not found: ID does not exist" Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.250876 4833 scope.go:117] "RemoveContainer" containerID="75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd" Oct 13 07:51:02 crc kubenswrapper[4833]: E1013 07:51:02.251240 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd\": container with ID starting with 75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd not found: ID does not exist" containerID="75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd" Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.251279 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd"} err="failed to get container status \"75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd\": rpc error: code = NotFound desc = could not find container \"75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd\": container with ID starting with 75c9bbbf468fbab14a189658c74824b8b4a89c0e5f116b0a457b43bdd1a65ccd not found: ID does not exist" Oct 13 07:51:02 crc kubenswrapper[4833]: I1013 07:51:02.636317 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41cb6a99-a382-47fd-9307-22849d9ff54b" path="/var/lib/kubelet/pods/41cb6a99-a382-47fd-9307-22849d9ff54b/volumes" Oct 13 07:51:03 crc kubenswrapper[4833]: I1013 07:51:03.635441 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 13 07:51:03 crc kubenswrapper[4833]: I1013 07:51:03.635749 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 13 07:51:03 crc kubenswrapper[4833]: I1013 07:51:03.703850 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 13 07:51:04 crc kubenswrapper[4833]: I1013 07:51:04.297020 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 13 07:51:04 crc kubenswrapper[4833]: I1013 07:51:04.484713 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 13 07:51:04 crc kubenswrapper[4833]: I1013 07:51:04.485036 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 13 07:51:04 crc kubenswrapper[4833]: I1013 07:51:04.793521 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 13 07:51:06 crc kubenswrapper[4833]: I1013 07:51:06.552890 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 13 07:51:06 crc kubenswrapper[4833]: I1013 07:51:06.617046 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 13 07:51:13 crc kubenswrapper[4833]: I1013 07:51:13.627029 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 
13 07:51:13 crc kubenswrapper[4833]: E1013 07:51:13.628036 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:51:20 crc kubenswrapper[4833]: I1013 07:51:20.981376 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n7wqg"] Oct 13 07:51:20 crc kubenswrapper[4833]: E1013 07:51:20.982253 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb6a99-a382-47fd-9307-22849d9ff54b" containerName="dnsmasq-dns" Oct 13 07:51:20 crc kubenswrapper[4833]: I1013 07:51:20.982272 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb6a99-a382-47fd-9307-22849d9ff54b" containerName="dnsmasq-dns" Oct 13 07:51:20 crc kubenswrapper[4833]: E1013 07:51:20.982298 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb6a99-a382-47fd-9307-22849d9ff54b" containerName="init" Oct 13 07:51:20 crc kubenswrapper[4833]: I1013 07:51:20.982309 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb6a99-a382-47fd-9307-22849d9ff54b" containerName="init" Oct 13 07:51:20 crc kubenswrapper[4833]: I1013 07:51:20.982515 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb6a99-a382-47fd-9307-22849d9ff54b" containerName="dnsmasq-dns" Oct 13 07:51:20 crc kubenswrapper[4833]: I1013 07:51:20.984204 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.007661 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7wqg"] Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.068699 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm75l\" (UniqueName: \"kubernetes.io/projected/0662d68d-9df6-420d-908d-1862e043b026-kube-api-access-qm75l\") pod \"certified-operators-n7wqg\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.068821 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-utilities\") pod \"certified-operators-n7wqg\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.068849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-catalog-content\") pod \"certified-operators-n7wqg\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.170212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-utilities\") pod \"certified-operators-n7wqg\" (UID: 
\"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.170263 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-catalog-content\") pod \"certified-operators-n7wqg\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.170325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm75l\" (UniqueName: \"kubernetes.io/projected/0662d68d-9df6-420d-908d-1862e043b026-kube-api-access-qm75l\") pod \"certified-operators-n7wqg\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.170856 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-utilities\") pod \"certified-operators-n7wqg\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.170889 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-catalog-content\") pod \"certified-operators-n7wqg\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.190698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm75l\" (UniqueName: \"kubernetes.io/projected/0662d68d-9df6-420d-908d-1862e043b026-kube-api-access-qm75l\") pod \"certified-operators-n7wqg\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.306651 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:21 crc kubenswrapper[4833]: I1013 07:51:21.810143 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7wqg"] Oct 13 07:51:22 crc kubenswrapper[4833]: I1013 07:51:22.383275 4833 generic.go:334] "Generic (PLEG): container finished" podID="0662d68d-9df6-420d-908d-1862e043b026" containerID="ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8" exitCode=0 Oct 13 07:51:22 crc kubenswrapper[4833]: I1013 07:51:22.383361 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wqg" event={"ID":"0662d68d-9df6-420d-908d-1862e043b026","Type":"ContainerDied","Data":"ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8"} Oct 13 07:51:22 crc kubenswrapper[4833]: I1013 07:51:22.383641 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wqg" event={"ID":"0662d68d-9df6-420d-908d-1862e043b026","Type":"ContainerStarted","Data":"c71f44bf0ac94f36461830a9af20fc1bc2d545dd1721210f55c46ca502e9af7f"} Oct 13 07:51:23 crc kubenswrapper[4833]: I1013 07:51:23.396247 4833 generic.go:334] "Generic (PLEG): container finished" podID="0662d68d-9df6-420d-908d-1862e043b026" containerID="ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727" exitCode=0 Oct 13 07:51:23 crc kubenswrapper[4833]: I1013 07:51:23.396301 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wqg" event={"ID":"0662d68d-9df6-420d-908d-1862e043b026","Type":"ContainerDied","Data":"ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727"} Oct 13 07:51:24 crc kubenswrapper[4833]: I1013 07:51:24.408878 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wqg" event={"ID":"0662d68d-9df6-420d-908d-1862e043b026","Type":"ContainerStarted","Data":"5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe"} Oct 13 07:51:24 crc kubenswrapper[4833]: I1013 07:51:24.454737 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n7wqg" podStartSLOduration=3.034917158 podStartE2EDuration="4.45470736s" podCreationTimestamp="2025-10-13 07:51:20 +0000 UTC" firstStartedPulling="2025-10-13 07:51:22.385689687 +0000 UTC m=+4972.486112633" lastFinishedPulling="2025-10-13 07:51:23.805479909 +0000 UTC m=+4973.905902835" observedRunningTime="2025-10-13 07:51:24.437731498 +0000 UTC m=+4974.538154484" watchObservedRunningTime="2025-10-13 07:51:24.45470736 +0000 UTC m=+4974.555130306" Oct 13 07:51:24 crc kubenswrapper[4833]: I1013 07:51:24.627986 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:51:24 crc kubenswrapper[4833]: E1013 07:51:24.628829 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:51:26 crc kubenswrapper[4833]: I1013 07:51:26.434656 4833 generic.go:334] "Generic (PLEG): container finished" podID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" 
containerID="405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38" exitCode=0 Oct 13 07:51:26 crc kubenswrapper[4833]: I1013 07:51:26.434814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c","Type":"ContainerDied","Data":"405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38"} Oct 13 07:51:26 crc kubenswrapper[4833]: I1013 07:51:26.437358 4833 generic.go:334] "Generic (PLEG): container finished" podID="cce9a391-72e2-421f-9311-a1afea3c6ee0" containerID="ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3" exitCode=0 Oct 13 07:51:26 crc kubenswrapper[4833]: I1013 07:51:26.437428 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cce9a391-72e2-421f-9311-a1afea3c6ee0","Type":"ContainerDied","Data":"ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3"} Oct 13 07:51:27 crc kubenswrapper[4833]: I1013 07:51:27.447510 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c","Type":"ContainerStarted","Data":"81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83"} Oct 13 07:51:27 crc kubenswrapper[4833]: I1013 07:51:27.448055 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 13 07:51:27 crc kubenswrapper[4833]: I1013 07:51:27.449703 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cce9a391-72e2-421f-9311-a1afea3c6ee0","Type":"ContainerStarted","Data":"61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08"} Oct 13 07:51:27 crc kubenswrapper[4833]: I1013 07:51:27.450042 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:27 crc kubenswrapper[4833]: I1013 07:51:27.489865 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.489836054 podStartE2EDuration="37.489836054s" podCreationTimestamp="2025-10-13 07:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:51:27.480121608 +0000 UTC m=+4977.580544524" watchObservedRunningTime="2025-10-13 07:51:27.489836054 +0000 UTC m=+4977.590259010" Oct 13 07:51:27 crc kubenswrapper[4833]: I1013 07:51:27.504593 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.504530901 podStartE2EDuration="37.504530901s" podCreationTimestamp="2025-10-13 07:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:51:27.502589236 +0000 UTC m=+4977.603012162" watchObservedRunningTime="2025-10-13 07:51:27.504530901 +0000 UTC m=+4977.604953847" Oct 13 07:51:31 crc kubenswrapper[4833]: I1013 07:51:31.307497 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:31 crc kubenswrapper[4833]: I1013 07:51:31.308032 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:31 crc kubenswrapper[4833]: I1013 07:51:31.378100 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:31 crc kubenswrapper[4833]: I1013 07:51:31.541049 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:31 crc kubenswrapper[4833]: I1013 07:51:31.617061 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7wqg"] Oct 13 07:51:33 crc kubenswrapper[4833]: I1013 07:51:33.495432 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n7wqg" podUID="0662d68d-9df6-420d-908d-1862e043b026" containerName="registry-server" containerID="cri-o://5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe" gracePeriod=2 Oct 13 07:51:33 crc kubenswrapper[4833]: I1013 07:51:33.910051 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:33 crc kubenswrapper[4833]: I1013 07:51:33.994025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-catalog-content\") pod \"0662d68d-9df6-420d-908d-1862e043b026\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " Oct 13 07:51:33 crc kubenswrapper[4833]: I1013 07:51:33.994104 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm75l\" (UniqueName: \"kubernetes.io/projected/0662d68d-9df6-420d-908d-1862e043b026-kube-api-access-qm75l\") pod \"0662d68d-9df6-420d-908d-1862e043b026\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " Oct 13 07:51:33 crc kubenswrapper[4833]: I1013 07:51:33.994161 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-utilities\") pod \"0662d68d-9df6-420d-908d-1862e043b026\" (UID: \"0662d68d-9df6-420d-908d-1862e043b026\") " Oct 13 07:51:33 crc kubenswrapper[4833]: I1013 07:51:33.994932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-utilities" (OuterVolumeSpecName: "utilities") pod "0662d68d-9df6-420d-908d-1862e043b026" (UID: "0662d68d-9df6-420d-908d-1862e043b026"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.004219 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0662d68d-9df6-420d-908d-1862e043b026-kube-api-access-qm75l" (OuterVolumeSpecName: "kube-api-access-qm75l") pod "0662d68d-9df6-420d-908d-1862e043b026" (UID: "0662d68d-9df6-420d-908d-1862e043b026"). InnerVolumeSpecName "kube-api-access-qm75l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.033934 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0662d68d-9df6-420d-908d-1862e043b026" (UID: "0662d68d-9df6-420d-908d-1862e043b026"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.095799 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.095830 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm75l\" (UniqueName: \"kubernetes.io/projected/0662d68d-9df6-420d-908d-1862e043b026-kube-api-access-qm75l\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.095841 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0662d68d-9df6-420d-908d-1862e043b026-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.509659 4833 generic.go:334] "Generic (PLEG): container finished" podID="0662d68d-9df6-420d-908d-1862e043b026" containerID="5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe" exitCode=0 Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.509716 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wqg" event={"ID":"0662d68d-9df6-420d-908d-1862e043b026","Type":"ContainerDied","Data":"5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe"} Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.509733 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7wqg" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.509751 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wqg" event={"ID":"0662d68d-9df6-420d-908d-1862e043b026","Type":"ContainerDied","Data":"c71f44bf0ac94f36461830a9af20fc1bc2d545dd1721210f55c46ca502e9af7f"} Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.509782 4833 scope.go:117] "RemoveContainer" containerID="5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.548861 4833 scope.go:117] "RemoveContainer" containerID="ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.571808 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7wqg"] Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.583635 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n7wqg"] Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.595887 4833 scope.go:117] "RemoveContainer" containerID="ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.622154 4833 scope.go:117] "RemoveContainer" containerID="5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe" Oct 13 07:51:34 crc kubenswrapper[4833]: E1013 07:51:34.623020 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe\": container with ID starting with 5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe not found: ID does not exist" containerID="5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.623144 
4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe"} err="failed to get container status \"5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe\": rpc error: code = NotFound desc = could not find container \"5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe\": container with ID starting with 5e6bfd3230ce0edfc3310953aec02e6cea7ab8d890677e45b2428949c9136cfe not found: ID does not exist" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.623211 4833 scope.go:117] "RemoveContainer" containerID="ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727" Oct 13 07:51:34 crc kubenswrapper[4833]: E1013 07:51:34.623696 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727\": container with ID starting with ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727 not found: ID does not exist" containerID="ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.623857 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727"} err="failed to get container status \"ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727\": rpc error: code = NotFound desc = could not find container \"ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727\": container with ID starting with ad76eea8df6bd8df1479c50d71738100f14a0d6d6016223bfc767c3b5a7e6727 not found: ID does not exist" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.623892 4833 scope.go:117] "RemoveContainer" containerID="ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8" Oct 13 07:51:34 crc kubenswrapper[4833]: E1013 07:51:34.624480 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8\": container with ID starting with ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8 not found: ID does not exist" containerID="ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.624514 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8"} err="failed to get container status \"ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8\": rpc error: code = NotFound desc = could not find container \"ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8\": container with ID starting with ad8893da89241eceb83094e47d9b96a7e831c526c047d2737f35bdaa6501dcc8 not found: ID does not exist" Oct 13 07:51:34 crc kubenswrapper[4833]: I1013 07:51:34.645399 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0662d68d-9df6-420d-908d-1862e043b026" path="/var/lib/kubelet/pods/0662d68d-9df6-420d-908d-1862e043b026/volumes" Oct 13 07:51:35 crc kubenswrapper[4833]: I1013 07:51:35.627519 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:51:35 crc kubenswrapper[4833]: E1013 07:51:35.628141 4833 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:51:41 crc kubenswrapper[4833]: I1013 07:51:41.801800 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:42 crc kubenswrapper[4833]: I1013 07:51:42.061725 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.181015 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-pct5n"] Oct 13 07:51:47 crc kubenswrapper[4833]: E1013 07:51:47.181807 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0662d68d-9df6-420d-908d-1862e043b026" containerName="registry-server" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.181830 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0662d68d-9df6-420d-908d-1862e043b026" containerName="registry-server" Oct 13 07:51:47 crc kubenswrapper[4833]: E1013 07:51:47.181877 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0662d68d-9df6-420d-908d-1862e043b026" containerName="extract-content" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.181889 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0662d68d-9df6-420d-908d-1862e043b026" containerName="extract-content" Oct 13 07:51:47 crc kubenswrapper[4833]: E1013 07:51:47.181914 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0662d68d-9df6-420d-908d-1862e043b026" containerName="extract-utilities" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.181925 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0662d68d-9df6-420d-908d-1862e043b026" containerName="extract-utilities" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.182185 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0662d68d-9df6-420d-908d-1862e043b026" containerName="registry-server" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.183761 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.192354 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-pct5n"] Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.264881 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-config\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.264953 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.264984 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7p2\" (UniqueName: \"kubernetes.io/projected/75a4b45d-5ace-4527-87b2-083eb5ea3199-kube-api-access-pv7p2\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.366619 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-config\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.366731 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.366803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7p2\" (UniqueName: \"kubernetes.io/projected/75a4b45d-5ace-4527-87b2-083eb5ea3199-kube-api-access-pv7p2\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.367922 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.368020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-config\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.399319 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7p2\" (UniqueName: 
\"kubernetes.io/projected/75a4b45d-5ace-4527-87b2-083eb5ea3199-kube-api-access-pv7p2\") pod \"dnsmasq-dns-5fdc957c47-pct5n\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.509689 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.627495 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:51:47 crc kubenswrapper[4833]: E1013 07:51:47.627973 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:51:47 crc kubenswrapper[4833]: I1013 07:51:47.998300 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-pct5n"] Oct 13 07:51:48 crc kubenswrapper[4833]: W1013 07:51:48.001715 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75a4b45d_5ace_4527_87b2_083eb5ea3199.slice/crio-bd34a08503bc3a060e7d253ddffc83453896f39336fec18127d708bd26c39b23 WatchSource:0}: Error finding container bd34a08503bc3a060e7d253ddffc83453896f39336fec18127d708bd26c39b23: Status 404 returned error can't find the container with id bd34a08503bc3a060e7d253ddffc83453896f39336fec18127d708bd26c39b23 Oct 13 07:51:48 crc kubenswrapper[4833]: I1013 07:51:48.068250 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 07:51:48 crc kubenswrapper[4833]: I1013 07:51:48.664328 4833 generic.go:334] "Generic (PLEG): container finished" podID="75a4b45d-5ace-4527-87b2-083eb5ea3199" containerID="4a5a2ef25b141d4b2641ab1f380b7ca7ed24833e5101fc8ab4db979082b9d746" exitCode=0 Oct 13 07:51:48 crc kubenswrapper[4833]: I1013 07:51:48.664562 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" event={"ID":"75a4b45d-5ace-4527-87b2-083eb5ea3199","Type":"ContainerDied","Data":"4a5a2ef25b141d4b2641ab1f380b7ca7ed24833e5101fc8ab4db979082b9d746"} Oct 13 07:51:48 crc kubenswrapper[4833]: I1013 07:51:48.664711 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" event={"ID":"75a4b45d-5ace-4527-87b2-083eb5ea3199","Type":"ContainerStarted","Data":"bd34a08503bc3a060e7d253ddffc83453896f39336fec18127d708bd26c39b23"} Oct 13 07:51:48 crc kubenswrapper[4833]: I1013 07:51:48.811384 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:51:49 crc kubenswrapper[4833]: I1013 07:51:49.671606 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" event={"ID":"75a4b45d-5ace-4527-87b2-083eb5ea3199","Type":"ContainerStarted","Data":"e845784987db35c05573d9e70f8bd4c6e6c965b7de10110ef340bbe60f3f1764"} Oct 13 07:51:49 crc kubenswrapper[4833]: I1013 07:51:49.672738 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:51:49 crc kubenswrapper[4833]: I1013 07:51:49.694656 4833 
Oct 13 07:51:52 crc kubenswrapper[4833]: I1013 07:51:52.091699 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" containerName="rabbitmq" containerID="cri-o://81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83" gracePeriod=604796
Oct 13 07:51:52 crc kubenswrapper[4833]: I1013 07:51:52.547523 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cce9a391-72e2-421f-9311-a1afea3c6ee0" containerName="rabbitmq" containerID="cri-o://61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08" gracePeriod=604797
Oct 13 07:51:57 crc kubenswrapper[4833]: I1013 07:51:57.511821 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n"
Oct 13 07:51:57 crc kubenswrapper[4833]: I1013 07:51:57.595410 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-qsw8m"]
Oct 13 07:51:57 crc kubenswrapper[4833]: I1013 07:51:57.595717 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" podUID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" containerName="dnsmasq-dns" containerID="cri-o://87390d06b661efeb111463db357756ad593225d01d13a01108c48c8ccf60a44f" gracePeriod=10
Oct 13 07:51:57 crc kubenswrapper[4833]: I1013 07:51:57.745101 4833 generic.go:334] "Generic (PLEG): container finished" podID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" containerID="87390d06b661efeb111463db357756ad593225d01d13a01108c48c8ccf60a44f" exitCode=0
Oct 13 07:51:57 crc kubenswrapper[4833]: I1013 07:51:57.745335 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" event={"ID":"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed","Type":"ContainerDied","Data":"87390d06b661efeb111463db357756ad593225d01d13a01108c48c8ccf60a44f"}
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.027349 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m"
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.149227 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh2z9\" (UniqueName: \"kubernetes.io/projected/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-kube-api-access-mh2z9\") pod \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.149324 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-dns-svc\") pod \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.149361 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-config\") pod \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\" (UID: \"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.161496 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-kube-api-access-mh2z9" (OuterVolumeSpecName: "kube-api-access-mh2z9") pod "7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" (UID: "7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed"). InnerVolumeSpecName "kube-api-access-mh2z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.191134 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" (UID: "7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.191454 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-config" (OuterVolumeSpecName: "config") pod "7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" (UID: "7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.253095 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh2z9\" (UniqueName: \"kubernetes.io/projected/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-kube-api-access-mh2z9\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.253148 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.253168 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed-config\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.589604 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.753493 4833 generic.go:334] "Generic (PLEG): container finished" podID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" containerID="81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83" exitCode=0
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.753574 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c","Type":"ContainerDied","Data":"81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83"}
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.753607 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c","Type":"ContainerDied","Data":"da4e578f1a266e6391c8f4b8153e8b76dc3b7f9218ce3a38900f28b36669b794"}
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.753630 4833 scope.go:117] "RemoveContainer" containerID="81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83"
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.753765 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.758476 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m" event={"ID":"7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed","Type":"ContainerDied","Data":"a08d211570d418875fb4f4fce4561002c2bf1273c3a67107431b67f527f1b5e0"}
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.758628 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-qsw8m"
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.758886 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-tls\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.758925 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-pod-info\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.758968 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-erlang-cookie-secret\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.759006 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-plugins\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.759047 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-config-data\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.759146 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.759197 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-plugins-conf\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.759230 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-server-conf\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.759317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-erlang-cookie\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.759369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-confd\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.759398 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9m9c\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-kube-api-access-h9m9c\") pod \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\" (UID: \"4a6a9ada-bed0-4151-9ec5-cb318a4cb43c\") "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.762212 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.762487 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.762509 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.764648 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.764789 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.765161 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-kube-api-access-h9m9c" (OuterVolumeSpecName: "kube-api-access-h9m9c") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "kube-api-access-h9m9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.767031 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-pod-info" (OuterVolumeSpecName: "pod-info") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.779533 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88" (OuterVolumeSpecName: "persistence") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.790169 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-config-data" (OuterVolumeSpecName: "config-data") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.824457 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-server-conf" (OuterVolumeSpecName: "server-conf") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.844967 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" (UID: "4a6a9ada-bed0-4151-9ec5-cb318a4cb43c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861021 4833 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861055 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861065 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861103 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") on node \"crc\" "
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861116 4833 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-plugins-conf\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861126 4833 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-server-conf\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861133 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861141 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861149 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9m9c\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-kube-api-access-h9m9c\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861156 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.861167 4833 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c-pod-info\") on node \"crc\" DevicePath \"\""
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.899969 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-qsw8m"]
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.904208 4833 scope.go:117] "RemoveContainer" containerID="405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38"
Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.907850 4833 kubelet.go:2431] "SyncLoop REMOVE"
source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-qsw8m"] Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.916417 4833 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.916621 4833 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88") on node "crc" Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.921716 4833 scope.go:117] "RemoveContainer" containerID="81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83" Oct 13 07:51:58 crc kubenswrapper[4833]: E1013 07:51:58.922428 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83\": container with ID starting with 81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83 not found: ID does not exist" containerID="81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83" Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.922452 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83"} err="failed to get container status \"81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83\": rpc error: code = NotFound desc = could not find container \"81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83\": container with ID starting with 81073e6e8d29f99b63f9ee00f0a62827ba56668beb58c70115a8486b7649cc83 not found: ID does not exist" Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.922471 4833 scope.go:117] "RemoveContainer" containerID="405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38" Oct 13 07:51:58 crc kubenswrapper[4833]: E1013 07:51:58.922774 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38\": container with ID starting with 405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38 not found: ID does not exist" containerID="405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38" Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.922789 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38"} err="failed to get container status \"405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38\": rpc error: code = NotFound desc = could not find container \"405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38\": container with ID starting with 405edc8a94a76dc01fbaf327c9a64c8309a2f7a74a22bec439d83653166d5d38 not found: ID does not exist" Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.922801 4833 scope.go:117] "RemoveContainer" containerID="87390d06b661efeb111463db357756ad593225d01d13a01108c48c8ccf60a44f" Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.946082 4833 scope.go:117] "RemoveContainer" containerID="7f34ea29ca0461ab2ae91b0b114ecbf673db48cd73671a808cd6e9530d1176af" Oct 13 07:51:58 crc kubenswrapper[4833]: I1013 07:51:58.962074 4833 reconciler_common.go:293] "Volume detached for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.063402 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.144957 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.160260 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164263 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-confd\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164314 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-tls\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164662 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cce9a391-72e2-421f-9311-a1afea3c6ee0-pod-info\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164739 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cce9a391-72e2-421f-9311-a1afea3c6ee0-erlang-cookie-secret\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164763 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-erlang-cookie\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164799 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-plugins\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164824 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-server-conf\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164889 
4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mfpm\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-kube-api-access-2mfpm\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164909 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-config-data\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.164951 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-plugins-conf\") pod \"cce9a391-72e2-421f-9311-a1afea3c6ee0\" (UID: \"cce9a391-72e2-421f-9311-a1afea3c6ee0\") " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.165260 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.165411 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.165908 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.165986 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: E1013 07:51:59.166174 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" containerName="rabbitmq" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166200 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" containerName="rabbitmq" Oct 13 07:51:59 crc kubenswrapper[4833]: E1013 07:51:59.166218 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce9a391-72e2-421f-9311-a1afea3c6ee0" containerName="setup-container" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166225 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce9a391-72e2-421f-9311-a1afea3c6ee0" containerName="setup-container" Oct 13 07:51:59 crc kubenswrapper[4833]: E1013 07:51:59.166236 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce9a391-72e2-421f-9311-a1afea3c6ee0" containerName="rabbitmq" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166242 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce9a391-72e2-421f-9311-a1afea3c6ee0" containerName="rabbitmq" Oct 13 07:51:59 crc kubenswrapper[4833]: E1013 07:51:59.166253 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" containerName="dnsmasq-dns" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166259 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" containerName="dnsmasq-dns" Oct 13 07:51:59 crc kubenswrapper[4833]: E1013 07:51:59.166267 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" containerName="init" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166272 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" containerName="init" Oct 13 07:51:59 crc kubenswrapper[4833]: E1013 07:51:59.166283 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" containerName="setup-container" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166288 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" containerName="setup-container" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166421 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" containerName="dnsmasq-dns" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166430 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce9a391-72e2-421f-9311-a1afea3c6ee0" containerName="rabbitmq" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.166444 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" containerName="rabbitmq" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.167224 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.170681 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8rjnn" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.170826 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.170992 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.171028 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.170688 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.171333 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.171608 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.176391 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.176484 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cce9a391-72e2-421f-9311-a1afea3c6ee0-pod-info" (OuterVolumeSpecName: "pod-info") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.176529 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce9a391-72e2-421f-9311-a1afea3c6ee0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.176905 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.185728 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-kube-api-access-2mfpm" (OuterVolumeSpecName: "kube-api-access-2mfpm") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "kube-api-access-2mfpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.198141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483" (OuterVolumeSpecName: "persistence") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). 
InnerVolumeSpecName "pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.211562 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-config-data" (OuterVolumeSpecName: "config-data") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.215817 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-server-conf" (OuterVolumeSpecName: "server-conf") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.259774 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cce9a391-72e2-421f-9311-a1afea3c6ee0" (UID: "cce9a391-72e2-421f-9311-a1afea3c6ee0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.266746 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.266805 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ng4b\" (UniqueName: \"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-kube-api-access-2ng4b\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.266862 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.266893 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.266919 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.266964 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267009 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267052 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267081 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/face6c99-0326-44b0-a9eb-c877d804ca2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267114 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/face6c99-0326-44b0-a9eb-c877d804ca2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267144 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267203 4833 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cce9a391-72e2-421f-9311-a1afea3c6ee0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267225 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267238 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267249 4833 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-server-conf\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267366 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mfpm\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-kube-api-access-2mfpm\") on node \"crc\" DevicePath \"\"" Oct 
13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267396 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267413 4833 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cce9a391-72e2-421f-9311-a1afea3c6ee0-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267426 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267438 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cce9a391-72e2-421f-9311-a1afea3c6ee0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267466 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") on node \"crc\" " Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.267480 4833 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cce9a391-72e2-421f-9311-a1afea3c6ee0-pod-info\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.289050 4833 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
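[editor's sketch] The "attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice..." entries above show kubelet's CSI attacher bypassing the device-unstage step because the kubevirt.io.hostpath-provisioner driver does not advertise that capability. A minimal Go sketch of the NodeGetCapabilities RPC that produces this behavior, assuming the upstream container-storage-interface Go bindings; the package and nodeServer type are illustrative, not the provisioner's actual source:

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct{}

// NodeGetCapabilities reports which optional node RPCs the driver supports.
// Returning no STAGE_UNSTAGE_VOLUME capability tells kubelet to call
// NodePublishVolume/NodeUnpublishVolume directly and to skip the separate
// staging/unstaging step, which is exactly the "Skipping UnmountDevice..."
// (and later "Skipping MountDevice...") path logged here.
func (s *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{},
	}, nil
}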
Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.289211 4833 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483") on node "crc" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368698 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/face6c99-0326-44b0-a9eb-c877d804ca2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368763 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/face6c99-0326-44b0-a9eb-c877d804ca2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368812 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368831 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ng4b\" (UniqueName: \"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-kube-api-access-2ng4b\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368870 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368925 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.368968 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.369015 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.369057 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.369109 4833 reconciler_common.go:293] "Volume detached for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") on node \"crc\" DevicePath \"\"" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.369377 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.371159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.371213 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.372116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.372362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/face6c99-0326-44b0-a9eb-c877d804ca2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.372780 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.372869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.373058 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/face6c99-0326-44b0-a9eb-c877d804ca2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.374125 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.374159 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2f1934ad412792be8e2bf06fa14d02296d4168d544547d24afcc4d0ada46464f/globalmount\"" pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.375184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/face6c99-0326-44b0-a9eb-c877d804ca2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.391240 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ng4b\" (UniqueName: \"kubernetes.io/projected/face6c99-0326-44b0-a9eb-c877d804ca2f-kube-api-access-2ng4b\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.399757 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c05dc055-3665-4bf3-a05a-bf5e015e0a88\") pod \"rabbitmq-server-0\" (UID: \"face6c99-0326-44b0-a9eb-c877d804ca2f\") " pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.492129 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.768782 4833 generic.go:334] "Generic (PLEG): container finished" podID="cce9a391-72e2-421f-9311-a1afea3c6ee0" containerID="61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08" exitCode=0 Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.768860 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cce9a391-72e2-421f-9311-a1afea3c6ee0","Type":"ContainerDied","Data":"61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08"} Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.768869 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.768885 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cce9a391-72e2-421f-9311-a1afea3c6ee0","Type":"ContainerDied","Data":"0ce10fabcf7b9fd034a5b8b3a45fc9ac4859a3e88bf9b70c3c32c6405d87b745"} Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.768901 4833 scope.go:117] "RemoveContainer" containerID="61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.800835 4833 scope.go:117] "RemoveContainer" containerID="ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.817875 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.828269 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.832825 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.838649 4833 scope.go:117] "RemoveContainer" containerID="61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08" Oct 13 07:51:59 crc kubenswrapper[4833]: E1013 07:51:59.839304 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08\": container with ID starting with 61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08 not found: ID does not exist" containerID="61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.839343 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08"} err="failed to get container status \"61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08\": rpc error: code = NotFound desc = could not find container \"61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08\": container with ID starting with 61f248a14ddfc133fb67e8a0095e18fdfa4b6e4bf79014b68c9b07d4db245b08 not found: ID does not exist" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.839369 4833 scope.go:117] "RemoveContainer" containerID="ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.839559 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: E1013 07:51:59.840089 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3\": container with ID starting with ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3 not found: ID does not exist" containerID="ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.840125 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3"} err="failed to get container status \"ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3\": rpc error: code = NotFound desc = could not find container \"ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3\": container with ID starting with ae8678314deb28e10be8375963e36724650c5cb97a6d04802773227783516ae3 not found: ID does not exist" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.841754 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.841834 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.842059 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.842071 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5t9lx" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.842357 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.842866 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.844925 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.845164 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.877450 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.877651 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.877729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-config-data\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.877819 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.877882 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.877949 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.877977 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.878025 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.878053 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.878109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.878128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqlbw\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-kube-api-access-sqlbw\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979615 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979675 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979714 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979731 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqlbw\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-kube-api-access-sqlbw\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979748 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979784 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979813 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.979839 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.981327 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.981678 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.982215 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.982279 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.982971 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.982251 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.983083 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/683dbe71177f427d5b7a565097b523476a3c38613846158d82f673131ad51b80/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.985320 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.985822 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.987320 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:51:59 crc kubenswrapper[4833]: I1013 07:51:59.987434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.000013 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqlbw\" (UniqueName: \"kubernetes.io/projected/aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9-kube-api-access-sqlbw\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.021992 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eac4c1df-ce8d-41ad-90e5-d3be9584c483\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.028931 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 07:52:00 crc kubenswrapper[4833]: W1013 07:52:00.032653 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podface6c99_0326_44b0_a9eb_c877d804ca2f.slice/crio-19c075bf1287bf0227a3f6ef3896cca977602799e55c19d1f08b442dd335d412 WatchSource:0}: Error finding container 19c075bf1287bf0227a3f6ef3896cca977602799e55c19d1f08b442dd335d412: Status 404 returned error can't find the container with id 19c075bf1287bf0227a3f6ef3896cca977602799e55c19d1f08b442dd335d412 Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.174421 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.639380 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:52:00 crc kubenswrapper[4833]: E1013 07:52:00.640460 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.647220 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a6a9ada-bed0-4151-9ec5-cb318a4cb43c" path="/var/lib/kubelet/pods/4a6a9ada-bed0-4151-9ec5-cb318a4cb43c/volumes" Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.648826 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed" path="/var/lib/kubelet/pods/7ead6067-51ec-4a24-81dc-5b0e6bc9e5ed/volumes" Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.651793 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce9a391-72e2-421f-9311-a1afea3c6ee0" path="/var/lib/kubelet/pods/cce9a391-72e2-421f-9311-a1afea3c6ee0/volumes" Oct 13 07:52:00 crc kubenswrapper[4833]: W1013 07:52:00.705575 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa77cbfd_35bb_40f8_9ba9_62dc99e07ad9.slice/crio-ba0d4798b0eac53c910532bd24842392a319223baf002398db46aa0dc738b729 WatchSource:0}: Error finding container ba0d4798b0eac53c910532bd24842392a319223baf002398db46aa0dc738b729: Status 404 returned error can't find the container with id ba0d4798b0eac53c910532bd24842392a319223baf002398db46aa0dc738b729 Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.706947 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.783149 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9","Type":"ContainerStarted","Data":"ba0d4798b0eac53c910532bd24842392a319223baf002398db46aa0dc738b729"} Oct 13 07:52:00 crc kubenswrapper[4833]: I1013 07:52:00.784928 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"face6c99-0326-44b0-a9eb-c877d804ca2f","Type":"ContainerStarted","Data":"19c075bf1287bf0227a3f6ef3896cca977602799e55c19d1f08b442dd335d412"} Oct 13 07:52:01 crc kubenswrapper[4833]: I1013 07:52:01.799662 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"face6c99-0326-44b0-a9eb-c877d804ca2f","Type":"ContainerStarted","Data":"c3232b0be867677fea0d0eb28b78d252c5e66a6e16073ef61b4017895309efa5"} Oct 13 07:52:02 crc kubenswrapper[4833]: I1013 07:52:02.814009 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9","Type":"ContainerStarted","Data":"e7612979f616fdf40ea5be0b6093528b3846c9bddf4633f96c4726aabc231584"} Oct 13 07:52:12 crc kubenswrapper[4833]: I1013 07:52:12.628116 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:52:12 crc kubenswrapper[4833]: E1013 07:52:12.629600 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:52:23 crc kubenswrapper[4833]: I1013 07:52:23.628158 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:52:23 crc kubenswrapper[4833]: E1013 07:52:23.629394 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:52:35 crc kubenswrapper[4833]: I1013 07:52:35.141949 4833 generic.go:334] "Generic (PLEG): container finished" podID="face6c99-0326-44b0-a9eb-c877d804ca2f" containerID="c3232b0be867677fea0d0eb28b78d252c5e66a6e16073ef61b4017895309efa5" exitCode=0 Oct 13 07:52:35 crc kubenswrapper[4833]: I1013 07:52:35.142064 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"face6c99-0326-44b0-a9eb-c877d804ca2f","Type":"ContainerDied","Data":"c3232b0be867677fea0d0eb28b78d252c5e66a6e16073ef61b4017895309efa5"} Oct 13 07:52:36 crc kubenswrapper[4833]: I1013 07:52:36.155946 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"face6c99-0326-44b0-a9eb-c877d804ca2f","Type":"ContainerStarted","Data":"ac2c6610781788433064d621ed25dcd78611ec09e70a9799059fb79d712a106d"} Oct 13 07:52:36 crc kubenswrapper[4833]: I1013 07:52:36.156785 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 13 07:52:36 crc kubenswrapper[4833]: I1013 07:52:36.159160 4833 generic.go:334] "Generic (PLEG): container finished" podID="aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9" containerID="e7612979f616fdf40ea5be0b6093528b3846c9bddf4633f96c4726aabc231584" exitCode=0 Oct 13 07:52:36 crc kubenswrapper[4833]: I1013 07:52:36.159248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9","Type":"ContainerDied","Data":"e7612979f616fdf40ea5be0b6093528b3846c9bddf4633f96c4726aabc231584"} Oct 13 07:52:36 crc kubenswrapper[4833]: I1013 07:52:36.181753 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.181729803 podStartE2EDuration="37.181729803s" podCreationTimestamp="2025-10-13 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:52:36.180144388 +0000 UTC m=+5046.280567344" watchObservedRunningTime="2025-10-13 07:52:36.181729803 +0000 UTC m=+5046.282152719" Oct 13 07:52:37 crc kubenswrapper[4833]: I1013 07:52:37.169781 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9","Type":"ContainerStarted","Data":"9654492fa2ba90b324f5a5e3478aa1219d05c828deac163ae7b1865c5ff6d7c0"} Oct 13 07:52:37 crc kubenswrapper[4833]: I1013 07:52:37.170356 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:52:37 crc kubenswrapper[4833]: I1013 07:52:37.206448 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.206420876 podStartE2EDuration="38.206420876s" podCreationTimestamp="2025-10-13 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:52:37.203201245 +0000 UTC m=+5047.303624161" watchObservedRunningTime="2025-10-13 07:52:37.206420876 +0000 UTC m=+5047.306843792" Oct 13 07:52:37 crc kubenswrapper[4833]: I1013 07:52:37.628008 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:52:37 crc kubenswrapper[4833]: E1013 07:52:37.628364 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:52:49 crc kubenswrapper[4833]: I1013 07:52:49.495836 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 13 07:52:50 crc kubenswrapper[4833]: I1013 07:52:50.178852 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 13 07:52:50 crc kubenswrapper[4833]: I1013 07:52:50.635247 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:52:50 crc kubenswrapper[4833]: E1013 07:52:50.635457 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:52:55 crc kubenswrapper[4833]: I1013 07:52:55.617340 4833 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/mariadb-client-1-default"] Oct 13 07:52:55 crc kubenswrapper[4833]: I1013 07:52:55.619654 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 13 07:52:55 crc kubenswrapper[4833]: I1013 07:52:55.622432 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bx4vr" Oct 13 07:52:55 crc kubenswrapper[4833]: I1013 07:52:55.629330 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 13 07:52:55 crc kubenswrapper[4833]: I1013 07:52:55.718125 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4wt\" (UniqueName: \"kubernetes.io/projected/6a5e1f2b-5283-4461-9456-c4b11867e32a-kube-api-access-sc4wt\") pod \"mariadb-client-1-default\" (UID: \"6a5e1f2b-5283-4461-9456-c4b11867e32a\") " pod="openstack/mariadb-client-1-default" Oct 13 07:52:55 crc kubenswrapper[4833]: I1013 07:52:55.819140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4wt\" (UniqueName: \"kubernetes.io/projected/6a5e1f2b-5283-4461-9456-c4b11867e32a-kube-api-access-sc4wt\") pod \"mariadb-client-1-default\" (UID: \"6a5e1f2b-5283-4461-9456-c4b11867e32a\") " pod="openstack/mariadb-client-1-default" Oct 13 07:52:55 crc kubenswrapper[4833]: I1013 07:52:55.856990 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4wt\" (UniqueName: \"kubernetes.io/projected/6a5e1f2b-5283-4461-9456-c4b11867e32a-kube-api-access-sc4wt\") pod \"mariadb-client-1-default\" (UID: \"6a5e1f2b-5283-4461-9456-c4b11867e32a\") " pod="openstack/mariadb-client-1-default" Oct 13 07:52:55 crc kubenswrapper[4833]: I1013 07:52:55.945969 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 13 07:52:56 crc kubenswrapper[4833]: I1013 07:52:56.287393 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 13 07:52:56 crc kubenswrapper[4833]: W1013 07:52:56.292771 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a5e1f2b_5283_4461_9456_c4b11867e32a.slice/crio-ba6695d1b077a201106abdc5643a8619e5f151b3ec8128051de4849ebd74ffd7 WatchSource:0}: Error finding container ba6695d1b077a201106abdc5643a8619e5f151b3ec8128051de4849ebd74ffd7: Status 404 returned error can't find the container with id ba6695d1b077a201106abdc5643a8619e5f151b3ec8128051de4849ebd74ffd7 Oct 13 07:52:56 crc kubenswrapper[4833]: I1013 07:52:56.324527 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"6a5e1f2b-5283-4461-9456-c4b11867e32a","Type":"ContainerStarted","Data":"ba6695d1b077a201106abdc5643a8619e5f151b3ec8128051de4849ebd74ffd7"} Oct 13 07:53:01 crc kubenswrapper[4833]: I1013 07:53:01.627072 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:53:01 crc kubenswrapper[4833]: E1013 07:53:01.627910 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:53:02 crc kubenswrapper[4833]: I1013 07:53:02.371827 4833 generic.go:334] "Generic (PLEG): container finished" podID="6a5e1f2b-5283-4461-9456-c4b11867e32a" containerID="cfc553ed4438c1a27298c1a6d0cd9bd88a051814e0121a88a606a09c020f33b3" exitCode=0 Oct 13 07:53:02 crc kubenswrapper[4833]: I1013 07:53:02.371957 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"6a5e1f2b-5283-4461-9456-c4b11867e32a","Type":"ContainerDied","Data":"cfc553ed4438c1a27298c1a6d0cd9bd88a051814e0121a88a606a09c020f33b3"} Oct 13 07:53:03 crc kubenswrapper[4833]: I1013 07:53:03.760657 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 13 07:53:03 crc kubenswrapper[4833]: I1013 07:53:03.795078 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_6a5e1f2b-5283-4461-9456-c4b11867e32a/mariadb-client-1-default/0.log" Oct 13 07:53:03 crc kubenswrapper[4833]: I1013 07:53:03.842039 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 13 07:53:03 crc kubenswrapper[4833]: I1013 07:53:03.845453 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4wt\" (UniqueName: \"kubernetes.io/projected/6a5e1f2b-5283-4461-9456-c4b11867e32a-kube-api-access-sc4wt\") pod \"6a5e1f2b-5283-4461-9456-c4b11867e32a\" (UID: \"6a5e1f2b-5283-4461-9456-c4b11867e32a\") " Oct 13 07:53:03 crc kubenswrapper[4833]: I1013 07:53:03.850568 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a5e1f2b-5283-4461-9456-c4b11867e32a-kube-api-access-sc4wt" (OuterVolumeSpecName: "kube-api-access-sc4wt") pod "6a5e1f2b-5283-4461-9456-c4b11867e32a" (UID: "6a5e1f2b-5283-4461-9456-c4b11867e32a"). InnerVolumeSpecName "kube-api-access-sc4wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:03 crc kubenswrapper[4833]: I1013 07:53:03.851513 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 13 07:53:03 crc kubenswrapper[4833]: I1013 07:53:03.947353 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc4wt\" (UniqueName: \"kubernetes.io/projected/6a5e1f2b-5283-4461-9456-c4b11867e32a-kube-api-access-sc4wt\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.359308 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Oct 13 07:53:04 crc kubenswrapper[4833]: E1013 07:53:04.359941 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5e1f2b-5283-4461-9456-c4b11867e32a" containerName="mariadb-client-1-default" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.359972 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5e1f2b-5283-4461-9456-c4b11867e32a" containerName="mariadb-client-1-default" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.360509 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5e1f2b-5283-4461-9456-c4b11867e32a" containerName="mariadb-client-1-default" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.361431 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.374737 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.395722 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba6695d1b077a201106abdc5643a8619e5f151b3ec8128051de4849ebd74ffd7" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.395837 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.456982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswzp\" (UniqueName: \"kubernetes.io/projected/ebd9c1e9-73a7-4a4a-9a75-09045657d801-kube-api-access-cswzp\") pod \"mariadb-client-2-default\" (UID: \"ebd9c1e9-73a7-4a4a-9a75-09045657d801\") " pod="openstack/mariadb-client-2-default" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.558958 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswzp\" (UniqueName: \"kubernetes.io/projected/ebd9c1e9-73a7-4a4a-9a75-09045657d801-kube-api-access-cswzp\") pod \"mariadb-client-2-default\" (UID: \"ebd9c1e9-73a7-4a4a-9a75-09045657d801\") " pod="openstack/mariadb-client-2-default" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.603875 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswzp\" (UniqueName: \"kubernetes.io/projected/ebd9c1e9-73a7-4a4a-9a75-09045657d801-kube-api-access-cswzp\") pod \"mariadb-client-2-default\" (UID: \"ebd9c1e9-73a7-4a4a-9a75-09045657d801\") " pod="openstack/mariadb-client-2-default" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.641910 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a5e1f2b-5283-4461-9456-c4b11867e32a" path="/var/lib/kubelet/pods/6a5e1f2b-5283-4461-9456-c4b11867e32a/volumes" Oct 13 07:53:04 crc kubenswrapper[4833]: I1013 07:53:04.693958 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 13 07:53:05 crc kubenswrapper[4833]: I1013 07:53:05.021381 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 13 07:53:05 crc kubenswrapper[4833]: I1013 07:53:05.409067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"ebd9c1e9-73a7-4a4a-9a75-09045657d801","Type":"ContainerStarted","Data":"f3cb7b46bf07b2c1a4d02b281dc2a3e1a0c757cbb7d8bde73a81afffc9662722"} Oct 13 07:53:05 crc kubenswrapper[4833]: I1013 07:53:05.409504 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"ebd9c1e9-73a7-4a4a-9a75-09045657d801","Type":"ContainerStarted","Data":"679747bc088ddadbc3634fe6efc00ce08bc16dd72eace9a86f2a9a00d9884d3e"} Oct 13 07:53:05 crc kubenswrapper[4833]: I1013 07:53:05.439736 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.439710438 podStartE2EDuration="1.439710438s" podCreationTimestamp="2025-10-13 07:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:53:05.429680723 +0000 UTC m=+5075.530103709" watchObservedRunningTime="2025-10-13 07:53:05.439710438 +0000 UTC m=+5075.540133374" Oct 13 07:53:06 crc kubenswrapper[4833]: I1013 07:53:06.421120 4833 generic.go:334] "Generic (PLEG): container finished" podID="ebd9c1e9-73a7-4a4a-9a75-09045657d801" containerID="f3cb7b46bf07b2c1a4d02b281dc2a3e1a0c757cbb7d8bde73a81afffc9662722" exitCode=0 Oct 13 07:53:06 crc kubenswrapper[4833]: I1013 07:53:06.421176 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" 
event={"ID":"ebd9c1e9-73a7-4a4a-9a75-09045657d801","Type":"ContainerDied","Data":"f3cb7b46bf07b2c1a4d02b281dc2a3e1a0c757cbb7d8bde73a81afffc9662722"} Oct 13 07:53:07 crc kubenswrapper[4833]: I1013 07:53:07.864562 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 13 07:53:07 crc kubenswrapper[4833]: I1013 07:53:07.905931 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 13 07:53:07 crc kubenswrapper[4833]: I1013 07:53:07.913437 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 13 07:53:07 crc kubenswrapper[4833]: I1013 07:53:07.916413 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cswzp\" (UniqueName: \"kubernetes.io/projected/ebd9c1e9-73a7-4a4a-9a75-09045657d801-kube-api-access-cswzp\") pod \"ebd9c1e9-73a7-4a4a-9a75-09045657d801\" (UID: \"ebd9c1e9-73a7-4a4a-9a75-09045657d801\") " Oct 13 07:53:07 crc kubenswrapper[4833]: I1013 07:53:07.922017 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd9c1e9-73a7-4a4a-9a75-09045657d801-kube-api-access-cswzp" (OuterVolumeSpecName: "kube-api-access-cswzp") pod "ebd9c1e9-73a7-4a4a-9a75-09045657d801" (UID: "ebd9c1e9-73a7-4a4a-9a75-09045657d801"). InnerVolumeSpecName "kube-api-access-cswzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.017951 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cswzp\" (UniqueName: \"kubernetes.io/projected/ebd9c1e9-73a7-4a4a-9a75-09045657d801-kube-api-access-cswzp\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.073685 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-3-default"] Oct 13 07:53:08 crc kubenswrapper[4833]: E1013 07:53:08.074129 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd9c1e9-73a7-4a4a-9a75-09045657d801" containerName="mariadb-client-2-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.074153 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd9c1e9-73a7-4a4a-9a75-09045657d801" containerName="mariadb-client-2-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.074448 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd9c1e9-73a7-4a4a-9a75-09045657d801" containerName="mariadb-client-2-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.075173 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-3-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.084822 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-3-default"] Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.119984 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtbv\" (UniqueName: \"kubernetes.io/projected/73c693b1-7b99-40d5-a1b1-02096f416623-kube-api-access-6rtbv\") pod \"mariadb-client-3-default\" (UID: \"73c693b1-7b99-40d5-a1b1-02096f416623\") " pod="openstack/mariadb-client-3-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.221336 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtbv\" (UniqueName: \"kubernetes.io/projected/73c693b1-7b99-40d5-a1b1-02096f416623-kube-api-access-6rtbv\") pod \"mariadb-client-3-default\" (UID: \"73c693b1-7b99-40d5-a1b1-02096f416623\") " pod="openstack/mariadb-client-3-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.243599 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtbv\" (UniqueName: \"kubernetes.io/projected/73c693b1-7b99-40d5-a1b1-02096f416623-kube-api-access-6rtbv\") pod \"mariadb-client-3-default\" (UID: \"73c693b1-7b99-40d5-a1b1-02096f416623\") " pod="openstack/mariadb-client-3-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.397027 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-3-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.460245 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679747bc088ddadbc3634fe6efc00ce08bc16dd72eace9a86f2a9a00d9884d3e" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.460352 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.639848 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd9c1e9-73a7-4a4a-9a75-09045657d801" path="/var/lib/kubelet/pods/ebd9c1e9-73a7-4a4a-9a75-09045657d801/volumes" Oct 13 07:53:08 crc kubenswrapper[4833]: I1013 07:53:08.792451 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-3-default"] Oct 13 07:53:08 crc kubenswrapper[4833]: W1013 07:53:08.798005 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c693b1_7b99_40d5_a1b1_02096f416623.slice/crio-13b4adf39ecf977ebbf2d55a012ba1cccb13f5899e4693197b15ff7bbc6d50e8 WatchSource:0}: Error finding container 13b4adf39ecf977ebbf2d55a012ba1cccb13f5899e4693197b15ff7bbc6d50e8: Status 404 returned error can't find the container with id 13b4adf39ecf977ebbf2d55a012ba1cccb13f5899e4693197b15ff7bbc6d50e8 Oct 13 07:53:09 crc kubenswrapper[4833]: I1013 07:53:09.468884 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-3-default" event={"ID":"73c693b1-7b99-40d5-a1b1-02096f416623","Type":"ContainerStarted","Data":"c2c664ca1ba051cfa48cb8e5a49cf224026b0aca68b73b15cdc483d8ca0ff332"} Oct 13 07:53:09 crc kubenswrapper[4833]: I1013 07:53:09.468928 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-3-default" event={"ID":"73c693b1-7b99-40d5-a1b1-02096f416623","Type":"ContainerStarted","Data":"13b4adf39ecf977ebbf2d55a012ba1cccb13f5899e4693197b15ff7bbc6d50e8"} Oct 13 07:53:09 crc kubenswrapper[4833]: I1013 07:53:09.493961 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-3-default" podStartSLOduration=1.493935527 podStartE2EDuration="1.493935527s" podCreationTimestamp="2025-10-13 07:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:53:09.485776826 +0000 UTC m=+5079.586199782" watchObservedRunningTime="2025-10-13 07:53:09.493935527 +0000 UTC m=+5079.594358483" Oct 13 07:53:11 crc kubenswrapper[4833]: I1013 07:53:11.490960 4833 generic.go:334] "Generic (PLEG): container finished" podID="73c693b1-7b99-40d5-a1b1-02096f416623" containerID="c2c664ca1ba051cfa48cb8e5a49cf224026b0aca68b73b15cdc483d8ca0ff332" exitCode=0 Oct 13 07:53:11 crc kubenswrapper[4833]: I1013 07:53:11.491105 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-3-default" event={"ID":"73c693b1-7b99-40d5-a1b1-02096f416623","Type":"ContainerDied","Data":"c2c664ca1ba051cfa48cb8e5a49cf224026b0aca68b73b15cdc483d8ca0ff332"} Oct 13 07:53:12 crc kubenswrapper[4833]: I1013 07:53:12.920227 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-3-default" Oct 13 07:53:12 crc kubenswrapper[4833]: I1013 07:53:12.972381 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-3-default"] Oct 13 07:53:12 crc kubenswrapper[4833]: I1013 07:53:12.981122 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-3-default"] Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.003830 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rtbv\" (UniqueName: \"kubernetes.io/projected/73c693b1-7b99-40d5-a1b1-02096f416623-kube-api-access-6rtbv\") pod \"73c693b1-7b99-40d5-a1b1-02096f416623\" (UID: \"73c693b1-7b99-40d5-a1b1-02096f416623\") " Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.010034 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c693b1-7b99-40d5-a1b1-02096f416623-kube-api-access-6rtbv" (OuterVolumeSpecName: "kube-api-access-6rtbv") pod "73c693b1-7b99-40d5-a1b1-02096f416623" (UID: "73c693b1-7b99-40d5-a1b1-02096f416623"). InnerVolumeSpecName "kube-api-access-6rtbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.106363 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rtbv\" (UniqueName: \"kubernetes.io/projected/73c693b1-7b99-40d5-a1b1-02096f416623-kube-api-access-6rtbv\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.377411 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Oct 13 07:53:13 crc kubenswrapper[4833]: E1013 07:53:13.378143 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c693b1-7b99-40d5-a1b1-02096f416623" containerName="mariadb-client-3-default" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.378192 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c693b1-7b99-40d5-a1b1-02096f416623" containerName="mariadb-client-3-default" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.378593 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c693b1-7b99-40d5-a1b1-02096f416623" containerName="mariadb-client-3-default" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.379839 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.394071 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.412217 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxmv\" (UniqueName: \"kubernetes.io/projected/0c765b4c-252c-4bf7-93da-96da65eb833d-kube-api-access-ntxmv\") pod \"mariadb-client-1\" (UID: \"0c765b4c-252c-4bf7-93da-96da65eb833d\") " pod="openstack/mariadb-client-1" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.512276 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b4adf39ecf977ebbf2d55a012ba1cccb13f5899e4693197b15ff7bbc6d50e8" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.512349 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-3-default" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.514226 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxmv\" (UniqueName: \"kubernetes.io/projected/0c765b4c-252c-4bf7-93da-96da65eb833d-kube-api-access-ntxmv\") pod \"mariadb-client-1\" (UID: \"0c765b4c-252c-4bf7-93da-96da65eb833d\") " pod="openstack/mariadb-client-1" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.538211 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxmv\" (UniqueName: \"kubernetes.io/projected/0c765b4c-252c-4bf7-93da-96da65eb833d-kube-api-access-ntxmv\") pod \"mariadb-client-1\" (UID: \"0c765b4c-252c-4bf7-93da-96da65eb833d\") " pod="openstack/mariadb-client-1" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.701935 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 13 07:53:13 crc kubenswrapper[4833]: I1013 07:53:13.989467 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Oct 13 07:53:14 crc kubenswrapper[4833]: I1013 07:53:14.527591 4833 generic.go:334] "Generic (PLEG): container finished" podID="0c765b4c-252c-4bf7-93da-96da65eb833d" containerID="0853a5dad95df3108798377be1e792e3f8fadefcb1cbe8c7c87d77a621704ebd" exitCode=0 Oct 13 07:53:14 crc kubenswrapper[4833]: I1013 07:53:14.527657 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"0c765b4c-252c-4bf7-93da-96da65eb833d","Type":"ContainerDied","Data":"0853a5dad95df3108798377be1e792e3f8fadefcb1cbe8c7c87d77a621704ebd"} Oct 13 07:53:14 crc kubenswrapper[4833]: I1013 07:53:14.527698 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"0c765b4c-252c-4bf7-93da-96da65eb833d","Type":"ContainerStarted","Data":"4254141ec72a5fb192a740501e5fe59e598c9205917aed3990a71d9ab0757163"} Oct 13 07:53:14 crc kubenswrapper[4833]: I1013 07:53:14.628238 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:53:14 crc kubenswrapper[4833]: E1013 07:53:14.628696 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:53:14 crc kubenswrapper[4833]: I1013 07:53:14.641674 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c693b1-7b99-40d5-a1b1-02096f416623" path="/var/lib/kubelet/pods/73c693b1-7b99-40d5-a1b1-02096f416623/volumes" Oct 13 07:53:15 crc kubenswrapper[4833]: I1013 07:53:15.992791 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.013838 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_0c765b4c-252c-4bf7-93da-96da65eb833d/mariadb-client-1/0.log" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.052941 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.069350 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.069730 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntxmv\" (UniqueName: \"kubernetes.io/projected/0c765b4c-252c-4bf7-93da-96da65eb833d-kube-api-access-ntxmv\") pod \"0c765b4c-252c-4bf7-93da-96da65eb833d\" (UID: \"0c765b4c-252c-4bf7-93da-96da65eb833d\") " Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.078008 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c765b4c-252c-4bf7-93da-96da65eb833d-kube-api-access-ntxmv" (OuterVolumeSpecName: "kube-api-access-ntxmv") pod "0c765b4c-252c-4bf7-93da-96da65eb833d" (UID: "0c765b4c-252c-4bf7-93da-96da65eb833d"). InnerVolumeSpecName "kube-api-access-ntxmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.171908 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntxmv\" (UniqueName: \"kubernetes.io/projected/0c765b4c-252c-4bf7-93da-96da65eb833d-kube-api-access-ntxmv\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.463037 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Oct 13 07:53:16 crc kubenswrapper[4833]: E1013 07:53:16.463730 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c765b4c-252c-4bf7-93da-96da65eb833d" containerName="mariadb-client-1" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.463779 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c765b4c-252c-4bf7-93da-96da65eb833d" containerName="mariadb-client-1" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.464178 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c765b4c-252c-4bf7-93da-96da65eb833d" containerName="mariadb-client-1" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.465511 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.477211 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.550453 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4254141ec72a5fb192a740501e5fe59e598c9205917aed3990a71d9ab0757163" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.550867 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.581395 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfb2l\" (UniqueName: \"kubernetes.io/projected/d8e3c2b1-c316-4ea4-87a4-79b62400018b-kube-api-access-hfb2l\") pod \"mariadb-client-4-default\" (UID: \"d8e3c2b1-c316-4ea4-87a4-79b62400018b\") " pod="openstack/mariadb-client-4-default" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.653583 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c765b4c-252c-4bf7-93da-96da65eb833d" path="/var/lib/kubelet/pods/0c765b4c-252c-4bf7-93da-96da65eb833d/volumes" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.683202 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfb2l\" (UniqueName: \"kubernetes.io/projected/d8e3c2b1-c316-4ea4-87a4-79b62400018b-kube-api-access-hfb2l\") pod \"mariadb-client-4-default\" (UID: \"d8e3c2b1-c316-4ea4-87a4-79b62400018b\") " pod="openstack/mariadb-client-4-default" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.714803 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfb2l\" (UniqueName: \"kubernetes.io/projected/d8e3c2b1-c316-4ea4-87a4-79b62400018b-kube-api-access-hfb2l\") pod \"mariadb-client-4-default\" (UID: \"d8e3c2b1-c316-4ea4-87a4-79b62400018b\") " pod="openstack/mariadb-client-4-default" Oct 13 07:53:16 crc kubenswrapper[4833]: I1013 07:53:16.794695 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 13 07:53:17 crc kubenswrapper[4833]: I1013 07:53:17.411751 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 13 07:53:17 crc kubenswrapper[4833]: W1013 07:53:17.415928 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8e3c2b1_c316_4ea4_87a4_79b62400018b.slice/crio-2f0189fe6727244a968135657b5075364b34ecc02aa108af2e5cada24464c514 WatchSource:0}: Error finding container 2f0189fe6727244a968135657b5075364b34ecc02aa108af2e5cada24464c514: Status 404 returned error can't find the container with id 2f0189fe6727244a968135657b5075364b34ecc02aa108af2e5cada24464c514 Oct 13 07:53:17 crc kubenswrapper[4833]: I1013 07:53:17.559192 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"d8e3c2b1-c316-4ea4-87a4-79b62400018b","Type":"ContainerStarted","Data":"2f0189fe6727244a968135657b5075364b34ecc02aa108af2e5cada24464c514"} Oct 13 07:53:18 crc kubenswrapper[4833]: I1013 07:53:18.570723 4833 generic.go:334] "Generic (PLEG): container finished" podID="d8e3c2b1-c316-4ea4-87a4-79b62400018b" containerID="4e69027ac0ac6405fb9f5f35d4cf4ac861bdb4e44ed3b97b84fc537418836906" exitCode=0 Oct 13 07:53:18 crc kubenswrapper[4833]: I1013 07:53:18.570783 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"d8e3c2b1-c316-4ea4-87a4-79b62400018b","Type":"ContainerDied","Data":"4e69027ac0ac6405fb9f5f35d4cf4ac861bdb4e44ed3b97b84fc537418836906"} Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.047909 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.067512 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_d8e3c2b1-c316-4ea4-87a4-79b62400018b/mariadb-client-4-default/0.log" Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.094050 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.100027 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.166954 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfb2l\" (UniqueName: \"kubernetes.io/projected/d8e3c2b1-c316-4ea4-87a4-79b62400018b-kube-api-access-hfb2l\") pod \"d8e3c2b1-c316-4ea4-87a4-79b62400018b\" (UID: \"d8e3c2b1-c316-4ea4-87a4-79b62400018b\") " Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.174629 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e3c2b1-c316-4ea4-87a4-79b62400018b-kube-api-access-hfb2l" (OuterVolumeSpecName: "kube-api-access-hfb2l") pod "d8e3c2b1-c316-4ea4-87a4-79b62400018b" (UID: "d8e3c2b1-c316-4ea4-87a4-79b62400018b"). InnerVolumeSpecName "kube-api-access-hfb2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.269363 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfb2l\" (UniqueName: \"kubernetes.io/projected/d8e3c2b1-c316-4ea4-87a4-79b62400018b-kube-api-access-hfb2l\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.592753 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f0189fe6727244a968135657b5075364b34ecc02aa108af2e5cada24464c514" Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.592853 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 13 07:53:20 crc kubenswrapper[4833]: I1013 07:53:20.648030 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e3c2b1-c316-4ea4-87a4-79b62400018b" path="/var/lib/kubelet/pods/d8e3c2b1-c316-4ea4-87a4-79b62400018b/volumes" Oct 13 07:53:24 crc kubenswrapper[4833]: I1013 07:53:24.923703 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Oct 13 07:53:24 crc kubenswrapper[4833]: E1013 07:53:24.924505 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e3c2b1-c316-4ea4-87a4-79b62400018b" containerName="mariadb-client-4-default" Oct 13 07:53:24 crc kubenswrapper[4833]: I1013 07:53:24.924527 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e3c2b1-c316-4ea4-87a4-79b62400018b" containerName="mariadb-client-4-default" Oct 13 07:53:24 crc kubenswrapper[4833]: I1013 07:53:24.924851 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e3c2b1-c316-4ea4-87a4-79b62400018b" containerName="mariadb-client-4-default" Oct 13 07:53:24 crc kubenswrapper[4833]: I1013 07:53:24.925794 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 13 07:53:24 crc kubenswrapper[4833]: I1013 07:53:24.937758 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bx4vr" Oct 13 07:53:24 crc kubenswrapper[4833]: I1013 07:53:24.946710 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 13 07:53:25 crc kubenswrapper[4833]: I1013 07:53:25.060390 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvpgr\" (UniqueName: \"kubernetes.io/projected/daeb8000-4883-49e0-89cf-4c05f2ec3f32-kube-api-access-lvpgr\") pod \"mariadb-client-5-default\" (UID: \"daeb8000-4883-49e0-89cf-4c05f2ec3f32\") " pod="openstack/mariadb-client-5-default" Oct 13 07:53:25 crc kubenswrapper[4833]: I1013 07:53:25.161326 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvpgr\" (UniqueName: \"kubernetes.io/projected/daeb8000-4883-49e0-89cf-4c05f2ec3f32-kube-api-access-lvpgr\") pod \"mariadb-client-5-default\" (UID: \"daeb8000-4883-49e0-89cf-4c05f2ec3f32\") " pod="openstack/mariadb-client-5-default" Oct 13 07:53:25 crc kubenswrapper[4833]: I1013 07:53:25.186178 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvpgr\" (UniqueName: \"kubernetes.io/projected/daeb8000-4883-49e0-89cf-4c05f2ec3f32-kube-api-access-lvpgr\") pod \"mariadb-client-5-default\" (UID: \"daeb8000-4883-49e0-89cf-4c05f2ec3f32\") " pod="openstack/mariadb-client-5-default" Oct 13 07:53:25 crc kubenswrapper[4833]: I1013 07:53:25.262191 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 13 07:53:25 crc kubenswrapper[4833]: I1013 07:53:25.889891 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 13 07:53:26 crc kubenswrapper[4833]: I1013 07:53:26.661154 4833 generic.go:334] "Generic (PLEG): container finished" podID="daeb8000-4883-49e0-89cf-4c05f2ec3f32" containerID="0b9ff885870a8fcb82e5a280ed0f6583b17db57f188debc8b43d488113bc623e" exitCode=0 Oct 13 07:53:26 crc kubenswrapper[4833]: I1013 07:53:26.661229 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"daeb8000-4883-49e0-89cf-4c05f2ec3f32","Type":"ContainerDied","Data":"0b9ff885870a8fcb82e5a280ed0f6583b17db57f188debc8b43d488113bc623e"} Oct 13 07:53:26 crc kubenswrapper[4833]: I1013 07:53:26.661303 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"daeb8000-4883-49e0-89cf-4c05f2ec3f32","Type":"ContainerStarted","Data":"f3a818509a50d578b5c8a78ce2990b1e9a96b0e1184fe3e7a8e356a4fe6d8f4a"} Oct 13 07:53:27 crc kubenswrapper[4833]: I1013 07:53:27.627632 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:53:27 crc kubenswrapper[4833]: E1013 07:53:27.628374 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.027988 4833 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.046757 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_daeb8000-4883-49e0-89cf-4c05f2ec3f32/mariadb-client-5-default/0.log" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.073189 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.078316 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.128749 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvpgr\" (UniqueName: \"kubernetes.io/projected/daeb8000-4883-49e0-89cf-4c05f2ec3f32-kube-api-access-lvpgr\") pod \"daeb8000-4883-49e0-89cf-4c05f2ec3f32\" (UID: \"daeb8000-4883-49e0-89cf-4c05f2ec3f32\") " Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.143839 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daeb8000-4883-49e0-89cf-4c05f2ec3f32-kube-api-access-lvpgr" (OuterVolumeSpecName: "kube-api-access-lvpgr") pod "daeb8000-4883-49e0-89cf-4c05f2ec3f32" (UID: "daeb8000-4883-49e0-89cf-4c05f2ec3f32"). InnerVolumeSpecName "kube-api-access-lvpgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.207687 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Oct 13 07:53:28 crc kubenswrapper[4833]: E1013 07:53:28.208114 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daeb8000-4883-49e0-89cf-4c05f2ec3f32" containerName="mariadb-client-5-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.208131 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="daeb8000-4883-49e0-89cf-4c05f2ec3f32" containerName="mariadb-client-5-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.208304 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="daeb8000-4883-49e0-89cf-4c05f2ec3f32" containerName="mariadb-client-5-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.208884 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.226812 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.230219 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvpgr\" (UniqueName: \"kubernetes.io/projected/daeb8000-4883-49e0-89cf-4c05f2ec3f32-kube-api-access-lvpgr\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.331651 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmb8\" (UniqueName: \"kubernetes.io/projected/11a72694-71f9-4af8-8b5d-7ff1b21a881b-kube-api-access-thmb8\") pod \"mariadb-client-6-default\" (UID: \"11a72694-71f9-4af8-8b5d-7ff1b21a881b\") " pod="openstack/mariadb-client-6-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.433879 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmb8\" (UniqueName: \"kubernetes.io/projected/11a72694-71f9-4af8-8b5d-7ff1b21a881b-kube-api-access-thmb8\") pod \"mariadb-client-6-default\" (UID: \"11a72694-71f9-4af8-8b5d-7ff1b21a881b\") " pod="openstack/mariadb-client-6-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.450610 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmb8\" (UniqueName: \"kubernetes.io/projected/11a72694-71f9-4af8-8b5d-7ff1b21a881b-kube-api-access-thmb8\") pod \"mariadb-client-6-default\" (UID: \"11a72694-71f9-4af8-8b5d-7ff1b21a881b\") " pod="openstack/mariadb-client-6-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.532321 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.652678 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daeb8000-4883-49e0-89cf-4c05f2ec3f32" path="/var/lib/kubelet/pods/daeb8000-4883-49e0-89cf-4c05f2ec3f32/volumes" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.683596 4833 scope.go:117] "RemoveContainer" containerID="0b9ff885870a8fcb82e5a280ed0f6583b17db57f188debc8b43d488113bc623e" Oct 13 07:53:28 crc kubenswrapper[4833]: I1013 07:53:28.683685 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 13 07:53:29 crc kubenswrapper[4833]: I1013 07:53:29.064290 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 13 07:53:29 crc kubenswrapper[4833]: W1013 07:53:29.069855 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11a72694_71f9_4af8_8b5d_7ff1b21a881b.slice/crio-c4144a5ade4e8112b02b65b5e968f8d46cea6f838beefd5dfb386928b0a2e32a WatchSource:0}: Error finding container c4144a5ade4e8112b02b65b5e968f8d46cea6f838beefd5dfb386928b0a2e32a: Status 404 returned error can't find the container with id c4144a5ade4e8112b02b65b5e968f8d46cea6f838beefd5dfb386928b0a2e32a Oct 13 07:53:29 crc kubenswrapper[4833]: I1013 07:53:29.704019 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"11a72694-71f9-4af8-8b5d-7ff1b21a881b","Type":"ContainerStarted","Data":"be8d8d112b41c3d9e78f90eb03b47f4388f6d921acc4fb21aacfde745cd0e7b5"} Oct 13 07:53:29 crc kubenswrapper[4833]: I1013 07:53:29.704086 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"11a72694-71f9-4af8-8b5d-7ff1b21a881b","Type":"ContainerStarted","Data":"c4144a5ade4e8112b02b65b5e968f8d46cea6f838beefd5dfb386928b0a2e32a"} Oct 13 07:53:29 crc kubenswrapper[4833]: I1013 07:53:29.727411 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.7273935630000001 podStartE2EDuration="1.727393563s" podCreationTimestamp="2025-10-13 07:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:53:29.722798743 +0000 UTC m=+5099.823221679" watchObservedRunningTime="2025-10-13 07:53:29.727393563 +0000 UTC m=+5099.827816489" Oct 13 07:53:29 crc kubenswrapper[4833]: I1013 07:53:29.855164 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_11a72694-71f9-4af8-8b5d-7ff1b21a881b/mariadb-client-6-default/0.log" Oct 13 07:53:30 crc kubenswrapper[4833]: I1013 07:53:30.721249 4833 generic.go:334] "Generic (PLEG): container finished" podID="11a72694-71f9-4af8-8b5d-7ff1b21a881b" containerID="be8d8d112b41c3d9e78f90eb03b47f4388f6d921acc4fb21aacfde745cd0e7b5" exitCode=0 Oct 13 07:53:30 crc kubenswrapper[4833]: I1013 07:53:30.721324 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"11a72694-71f9-4af8-8b5d-7ff1b21a881b","Type":"ContainerDied","Data":"be8d8d112b41c3d9e78f90eb03b47f4388f6d921acc4fb21aacfde745cd0e7b5"} Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.159699 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.241696 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.253015 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.320918 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thmb8\" (UniqueName: \"kubernetes.io/projected/11a72694-71f9-4af8-8b5d-7ff1b21a881b-kube-api-access-thmb8\") pod \"11a72694-71f9-4af8-8b5d-7ff1b21a881b\" (UID: \"11a72694-71f9-4af8-8b5d-7ff1b21a881b\") " Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.329893 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a72694-71f9-4af8-8b5d-7ff1b21a881b-kube-api-access-thmb8" (OuterVolumeSpecName: "kube-api-access-thmb8") pod "11a72694-71f9-4af8-8b5d-7ff1b21a881b" (UID: "11a72694-71f9-4af8-8b5d-7ff1b21a881b"). InnerVolumeSpecName "kube-api-access-thmb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.361420 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Oct 13 07:53:32 crc kubenswrapper[4833]: E1013 07:53:32.361889 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a72694-71f9-4af8-8b5d-7ff1b21a881b" containerName="mariadb-client-6-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.361918 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a72694-71f9-4af8-8b5d-7ff1b21a881b" containerName="mariadb-client-6-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.362126 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a72694-71f9-4af8-8b5d-7ff1b21a881b" containerName="mariadb-client-6-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.362776 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.369590 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.422807 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thmb8\" (UniqueName: \"kubernetes.io/projected/11a72694-71f9-4af8-8b5d-7ff1b21a881b-kube-api-access-thmb8\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.524264 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swjxd\" (UniqueName: \"kubernetes.io/projected/f835a7f6-ba64-4982-93a2-de101a5fd893-kube-api-access-swjxd\") pod \"mariadb-client-7-default\" (UID: \"f835a7f6-ba64-4982-93a2-de101a5fd893\") " pod="openstack/mariadb-client-7-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.626017 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swjxd\" (UniqueName: \"kubernetes.io/projected/f835a7f6-ba64-4982-93a2-de101a5fd893-kube-api-access-swjxd\") pod \"mariadb-client-7-default\" (UID: \"f835a7f6-ba64-4982-93a2-de101a5fd893\") " pod="openstack/mariadb-client-7-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.639158 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a72694-71f9-4af8-8b5d-7ff1b21a881b" path="/var/lib/kubelet/pods/11a72694-71f9-4af8-8b5d-7ff1b21a881b/volumes" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.647087 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swjxd\" (UniqueName: \"kubernetes.io/projected/f835a7f6-ba64-4982-93a2-de101a5fd893-kube-api-access-swjxd\") pod \"mariadb-client-7-default\" (UID: \"f835a7f6-ba64-4982-93a2-de101a5fd893\") " pod="openstack/mariadb-client-7-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.691200 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.741395 4833 scope.go:117] "RemoveContainer" containerID="be8d8d112b41c3d9e78f90eb03b47f4388f6d921acc4fb21aacfde745cd0e7b5" Oct 13 07:53:32 crc kubenswrapper[4833]: I1013 07:53:32.741559 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 13 07:53:33 crc kubenswrapper[4833]: I1013 07:53:33.240340 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 13 07:53:33 crc kubenswrapper[4833]: I1013 07:53:33.757575 4833 generic.go:334] "Generic (PLEG): container finished" podID="f835a7f6-ba64-4982-93a2-de101a5fd893" containerID="43f710b55224fe33946f449d78bc80043e891e046ee6ab6bce785aa47750bce4" exitCode=0 Oct 13 07:53:33 crc kubenswrapper[4833]: I1013 07:53:33.757628 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"f835a7f6-ba64-4982-93a2-de101a5fd893","Type":"ContainerDied","Data":"43f710b55224fe33946f449d78bc80043e891e046ee6ab6bce785aa47750bce4"} Oct 13 07:53:33 crc kubenswrapper[4833]: I1013 07:53:33.757665 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"f835a7f6-ba64-4982-93a2-de101a5fd893","Type":"ContainerStarted","Data":"9f31b65fcae58a583f8115b39f4175a9dceb02259d8395eb4f33b0ad86a1a5cc"} Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.233897 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.252251 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_f835a7f6-ba64-4982-93a2-de101a5fd893/mariadb-client-7-default/0.log" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.287356 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.294755 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.378051 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swjxd\" (UniqueName: \"kubernetes.io/projected/f835a7f6-ba64-4982-93a2-de101a5fd893-kube-api-access-swjxd\") pod \"f835a7f6-ba64-4982-93a2-de101a5fd893\" (UID: \"f835a7f6-ba64-4982-93a2-de101a5fd893\") " Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.388034 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f835a7f6-ba64-4982-93a2-de101a5fd893-kube-api-access-swjxd" (OuterVolumeSpecName: "kube-api-access-swjxd") pod "f835a7f6-ba64-4982-93a2-de101a5fd893" (UID: "f835a7f6-ba64-4982-93a2-de101a5fd893"). InnerVolumeSpecName "kube-api-access-swjxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.416141 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Oct 13 07:53:35 crc kubenswrapper[4833]: E1013 07:53:35.417466 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f835a7f6-ba64-4982-93a2-de101a5fd893" containerName="mariadb-client-7-default" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.417581 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f835a7f6-ba64-4982-93a2-de101a5fd893" containerName="mariadb-client-7-default" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.417847 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f835a7f6-ba64-4982-93a2-de101a5fd893" containerName="mariadb-client-7-default" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.418530 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.425094 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.480373 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swjxd\" (UniqueName: \"kubernetes.io/projected/f835a7f6-ba64-4982-93a2-de101a5fd893-kube-api-access-swjxd\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.582440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hb76\" (UniqueName: \"kubernetes.io/projected/cc5ddb09-5c76-4f79-b5ad-c3071eddb952-kube-api-access-5hb76\") pod \"mariadb-client-2\" (UID: \"cc5ddb09-5c76-4f79-b5ad-c3071eddb952\") " pod="openstack/mariadb-client-2" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.684468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hb76\" (UniqueName: \"kubernetes.io/projected/cc5ddb09-5c76-4f79-b5ad-c3071eddb952-kube-api-access-5hb76\") pod \"mariadb-client-2\" (UID: \"cc5ddb09-5c76-4f79-b5ad-c3071eddb952\") " pod="openstack/mariadb-client-2" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.707977 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hb76\" (UniqueName: \"kubernetes.io/projected/cc5ddb09-5c76-4f79-b5ad-c3071eddb952-kube-api-access-5hb76\") pod \"mariadb-client-2\" (UID: \"cc5ddb09-5c76-4f79-b5ad-c3071eddb952\") " pod="openstack/mariadb-client-2" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.763098 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.791893 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f31b65fcae58a583f8115b39f4175a9dceb02259d8395eb4f33b0ad86a1a5cc" Oct 13 07:53:35 crc kubenswrapper[4833]: I1013 07:53:35.791984 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 13 07:53:36 crc kubenswrapper[4833]: I1013 07:53:36.344969 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 13 07:53:36 crc kubenswrapper[4833]: W1013 07:53:36.353796 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc5ddb09_5c76_4f79_b5ad_c3071eddb952.slice/crio-83f02923a04f8a30f999f2b4dafc29774de196cbd2e5912c3ec4e34cfbdb5cfa WatchSource:0}: Error finding container 83f02923a04f8a30f999f2b4dafc29774de196cbd2e5912c3ec4e34cfbdb5cfa: Status 404 returned error can't find the container with id 83f02923a04f8a30f999f2b4dafc29774de196cbd2e5912c3ec4e34cfbdb5cfa Oct 13 07:53:36 crc kubenswrapper[4833]: I1013 07:53:36.646944 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f835a7f6-ba64-4982-93a2-de101a5fd893" path="/var/lib/kubelet/pods/f835a7f6-ba64-4982-93a2-de101a5fd893/volumes" Oct 13 07:53:36 crc kubenswrapper[4833]: I1013 07:53:36.803007 4833 generic.go:334] "Generic (PLEG): container finished" podID="cc5ddb09-5c76-4f79-b5ad-c3071eddb952" containerID="73f5818d9262923f7274b217e29a154b0ce8280e1808ecec3ed2f4f94a0a62ae" exitCode=0 Oct 13 07:53:36 crc kubenswrapper[4833]: I1013 07:53:36.803132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"cc5ddb09-5c76-4f79-b5ad-c3071eddb952","Type":"ContainerDied","Data":"73f5818d9262923f7274b217e29a154b0ce8280e1808ecec3ed2f4f94a0a62ae"} Oct 13 07:53:36 crc kubenswrapper[4833]: I1013 07:53:36.803174 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"cc5ddb09-5c76-4f79-b5ad-c3071eddb952","Type":"ContainerStarted","Data":"83f02923a04f8a30f999f2b4dafc29774de196cbd2e5912c3ec4e34cfbdb5cfa"} Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.306766 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.331744 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_cc5ddb09-5c76-4f79-b5ad-c3071eddb952/mariadb-client-2/0.log" Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.368003 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.377483 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.436254 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hb76\" (UniqueName: \"kubernetes.io/projected/cc5ddb09-5c76-4f79-b5ad-c3071eddb952-kube-api-access-5hb76\") pod \"cc5ddb09-5c76-4f79-b5ad-c3071eddb952\" (UID: \"cc5ddb09-5c76-4f79-b5ad-c3071eddb952\") " Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.445937 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5ddb09-5c76-4f79-b5ad-c3071eddb952-kube-api-access-5hb76" (OuterVolumeSpecName: "kube-api-access-5hb76") pod "cc5ddb09-5c76-4f79-b5ad-c3071eddb952" (UID: "cc5ddb09-5c76-4f79-b5ad-c3071eddb952"). InnerVolumeSpecName "kube-api-access-5hb76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.539426 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hb76\" (UniqueName: \"kubernetes.io/projected/cc5ddb09-5c76-4f79-b5ad-c3071eddb952-kube-api-access-5hb76\") on node \"crc\" DevicePath \"\"" Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.648262 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5ddb09-5c76-4f79-b5ad-c3071eddb952" path="/var/lib/kubelet/pods/cc5ddb09-5c76-4f79-b5ad-c3071eddb952/volumes" Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.826313 4833 scope.go:117] "RemoveContainer" containerID="73f5818d9262923f7274b217e29a154b0ce8280e1808ecec3ed2f4f94a0a62ae" Oct 13 07:53:38 crc kubenswrapper[4833]: I1013 07:53:38.826382 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 13 07:53:40 crc kubenswrapper[4833]: I1013 07:53:40.635407 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:53:40 crc kubenswrapper[4833]: E1013 07:53:40.636394 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:53:51 crc kubenswrapper[4833]: I1013 07:53:51.627371 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:53:51 crc kubenswrapper[4833]: E1013 07:53:51.628655 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:54:04 crc kubenswrapper[4833]: I1013 07:54:04.627097 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:54:04 crc kubenswrapper[4833]: E1013 07:54:04.628141 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:54:15 crc kubenswrapper[4833]: I1013 07:54:15.628177 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:54:15 crc kubenswrapper[4833]: E1013 07:54:15.629321 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:54:30 crc kubenswrapper[4833]: I1013 07:54:30.645813 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:54:30 crc kubenswrapper[4833]: E1013 07:54:30.648369 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:54:45 crc kubenswrapper[4833]: I1013 07:54:45.627574 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:54:45 crc kubenswrapper[4833]: E1013 07:54:45.628368 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:54:50 crc kubenswrapper[4833]: I1013 07:54:50.488026 4833 scope.go:117] "RemoveContainer" containerID="050f4d381c5181b1988676067674e4b7c6291ec14b2caca8ec3c03b15355acbf" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.281534 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fvq"] Oct 13 07:54:59 crc kubenswrapper[4833]: E1013 07:54:59.284401 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5ddb09-5c76-4f79-b5ad-c3071eddb952" containerName="mariadb-client-2" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.284586 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5ddb09-5c76-4f79-b5ad-c3071eddb952" containerName="mariadb-client-2" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.285000 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5ddb09-5c76-4f79-b5ad-c3071eddb952" containerName="mariadb-client-2" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.287251 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.291461 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fvq"] Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.412570 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-utilities\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.412793 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-catalog-content\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.412984 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d9k4\" (UniqueName: \"kubernetes.io/projected/d49684f1-1d14-45ed-b8e5-4bac712d31b7-kube-api-access-2d9k4\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.514284 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-utilities\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.514341 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-catalog-content\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.514394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d9k4\" (UniqueName: \"kubernetes.io/projected/d49684f1-1d14-45ed-b8e5-4bac712d31b7-kube-api-access-2d9k4\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.514905 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-utilities\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.514953 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-catalog-content\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.548686 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2d9k4\" (UniqueName: \"kubernetes.io/projected/d49684f1-1d14-45ed-b8e5-4bac712d31b7-kube-api-access-2d9k4\") pod \"redhat-marketplace-m2fvq\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.614444 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:54:59 crc kubenswrapper[4833]: I1013 07:54:59.627347 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:54:59 crc kubenswrapper[4833]: E1013 07:54:59.627810 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:55:00 crc kubenswrapper[4833]: I1013 07:55:00.108490 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fvq"] Oct 13 07:55:00 crc kubenswrapper[4833]: I1013 07:55:00.671529 4833 generic.go:334] "Generic (PLEG): container finished" podID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerID="3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030" exitCode=0 Oct 13 07:55:00 crc kubenswrapper[4833]: I1013 07:55:00.671649 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fvq" event={"ID":"d49684f1-1d14-45ed-b8e5-4bac712d31b7","Type":"ContainerDied","Data":"3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030"} Oct 13 07:55:00 crc kubenswrapper[4833]: I1013 07:55:00.671742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fvq" event={"ID":"d49684f1-1d14-45ed-b8e5-4bac712d31b7","Type":"ContainerStarted","Data":"4056d36243c228d8c4398d59909a25616c8bbd568d8a3cffe3a2437abdaa0c54"} Oct 13 07:55:00 crc kubenswrapper[4833]: I1013 07:55:00.674609 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 07:55:02 crc kubenswrapper[4833]: I1013 07:55:02.690108 4833 generic.go:334] "Generic (PLEG): container finished" podID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerID="3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46" exitCode=0 Oct 13 07:55:02 crc kubenswrapper[4833]: I1013 07:55:02.690433 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fvq" event={"ID":"d49684f1-1d14-45ed-b8e5-4bac712d31b7","Type":"ContainerDied","Data":"3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46"} Oct 13 07:55:03 crc kubenswrapper[4833]: I1013 07:55:03.711289 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fvq" event={"ID":"d49684f1-1d14-45ed-b8e5-4bac712d31b7","Type":"ContainerStarted","Data":"19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74"} Oct 13 07:55:03 crc kubenswrapper[4833]: I1013 07:55:03.741048 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m2fvq" podStartSLOduration=2.142764006 podStartE2EDuration="4.741031846s" 
podCreationTimestamp="2025-10-13 07:54:59 +0000 UTC" firstStartedPulling="2025-10-13 07:55:00.67412056 +0000 UTC m=+5190.774543516" lastFinishedPulling="2025-10-13 07:55:03.27238844 +0000 UTC m=+5193.372811356" observedRunningTime="2025-10-13 07:55:03.738175135 +0000 UTC m=+5193.838598051" watchObservedRunningTime="2025-10-13 07:55:03.741031846 +0000 UTC m=+5193.841454762" Oct 13 07:55:09 crc kubenswrapper[4833]: I1013 07:55:09.615145 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:55:09 crc kubenswrapper[4833]: I1013 07:55:09.615860 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:55:09 crc kubenswrapper[4833]: I1013 07:55:09.691266 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:55:09 crc kubenswrapper[4833]: I1013 07:55:09.852309 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:55:09 crc kubenswrapper[4833]: I1013 07:55:09.931905 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fvq"] Oct 13 07:55:11 crc kubenswrapper[4833]: I1013 07:55:11.793019 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m2fvq" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerName="registry-server" containerID="cri-o://19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74" gracePeriod=2 Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.290017 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.354236 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d9k4\" (UniqueName: \"kubernetes.io/projected/d49684f1-1d14-45ed-b8e5-4bac712d31b7-kube-api-access-2d9k4\") pod \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.354312 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-catalog-content\") pod \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.354396 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-utilities\") pod \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\" (UID: \"d49684f1-1d14-45ed-b8e5-4bac712d31b7\") " Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.356139 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-utilities" (OuterVolumeSpecName: "utilities") pod "d49684f1-1d14-45ed-b8e5-4bac712d31b7" (UID: "d49684f1-1d14-45ed-b8e5-4bac712d31b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.365268 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49684f1-1d14-45ed-b8e5-4bac712d31b7-kube-api-access-2d9k4" (OuterVolumeSpecName: "kube-api-access-2d9k4") pod "d49684f1-1d14-45ed-b8e5-4bac712d31b7" (UID: "d49684f1-1d14-45ed-b8e5-4bac712d31b7"). InnerVolumeSpecName "kube-api-access-2d9k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.367835 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d49684f1-1d14-45ed-b8e5-4bac712d31b7" (UID: "d49684f1-1d14-45ed-b8e5-4bac712d31b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.456285 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.456341 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d9k4\" (UniqueName: \"kubernetes.io/projected/d49684f1-1d14-45ed-b8e5-4bac712d31b7-kube-api-access-2d9k4\") on node \"crc\" DevicePath \"\"" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.456359 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49684f1-1d14-45ed-b8e5-4bac712d31b7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.803813 4833 generic.go:334] "Generic (PLEG): container finished" podID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerID="19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74" exitCode=0 Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.804125 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2fvq" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.804150 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fvq" event={"ID":"d49684f1-1d14-45ed-b8e5-4bac712d31b7","Type":"ContainerDied","Data":"19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74"} Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.805002 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fvq" event={"ID":"d49684f1-1d14-45ed-b8e5-4bac712d31b7","Type":"ContainerDied","Data":"4056d36243c228d8c4398d59909a25616c8bbd568d8a3cffe3a2437abdaa0c54"} Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.805055 4833 scope.go:117] "RemoveContainer" containerID="19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.837044 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fvq"] Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.839087 4833 scope.go:117] "RemoveContainer" containerID="3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.842462 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fvq"] Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.862834 4833 scope.go:117] "RemoveContainer" containerID="3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.909476 4833 scope.go:117] "RemoveContainer" containerID="19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74" Oct 13 07:55:12 crc kubenswrapper[4833]: E1013 07:55:12.910783 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74\": container with ID starting with 19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74 not found: ID does not exist" containerID="19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.910838 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74"} err="failed to get container status \"19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74\": rpc error: code = NotFound desc = could not find container \"19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74\": container with ID starting with 19830e92e20a46e692db90e0b1a5f691eca27109cd301c4958874a42ad22dc74 not found: ID does not exist" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.910880 4833 scope.go:117] "RemoveContainer" containerID="3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46" Oct 13 07:55:12 crc kubenswrapper[4833]: E1013 07:55:12.911267 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46\": container with ID starting with 3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46 not found: ID does not exist" containerID="3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.911294 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46"} err="failed to get container status \"3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46\": rpc error: code = NotFound desc = could not find container \"3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46\": container with ID starting with 3022f5ff62c568b0899c90737ce9caace376e865946f0614c6ae21e4b01e6e46 not found: ID does not exist" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.911310 4833 scope.go:117] "RemoveContainer" containerID="3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030" Oct 13 07:55:12 crc kubenswrapper[4833]: E1013 07:55:12.911529 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030\": container with ID starting with 3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030 not found: ID does not exist" containerID="3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030" Oct 13 07:55:12 crc kubenswrapper[4833]: I1013 07:55:12.911570 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030"} err="failed to get container status \"3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030\": rpc error: code = NotFound desc = could not find container \"3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030\": container with ID starting with 3447ba3ae9a7e3be1bde8571cd8e655fc2eefef5b5baf4766a5c55e8ca6cb030 not found: ID does not exist" Oct 13 07:55:14 crc kubenswrapper[4833]: I1013 07:55:14.627696 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:55:14 crc kubenswrapper[4833]: E1013 07:55:14.628352 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:55:14 crc kubenswrapper[4833]: I1013 07:55:14.639793 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" path="/var/lib/kubelet/pods/d49684f1-1d14-45ed-b8e5-4bac712d31b7/volumes" Oct 13 07:55:27 crc kubenswrapper[4833]: I1013 07:55:27.627350 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:55:27 crc kubenswrapper[4833]: E1013 07:55:27.628438 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:55:40 crc kubenswrapper[4833]: I1013 07:55:40.630286 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:55:40 crc 
kubenswrapper[4833]: E1013 07:55:40.631342 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:55:53 crc kubenswrapper[4833]: I1013 07:55:53.627430 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:55:53 crc kubenswrapper[4833]: E1013 07:55:53.628672 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 07:56:07 crc kubenswrapper[4833]: I1013 07:56:07.627851 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:56:08 crc kubenswrapper[4833]: I1013 07:56:08.346622 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"529aa9b3cfb16d8dd65469ec64ebdc474a0f44417fe7a26128d25222a7982fb0"} Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.493301 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Oct 13 07:57:32 crc kubenswrapper[4833]: E1013 07:57:32.494074 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerName="extract-content" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.494089 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerName="extract-content" Oct 13 07:57:32 crc kubenswrapper[4833]: E1013 07:57:32.494116 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerName="registry-server" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.494123 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerName="registry-server" Oct 13 07:57:32 crc kubenswrapper[4833]: E1013 07:57:32.494134 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerName="extract-utilities" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.494143 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerName="extract-utilities" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.494322 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49684f1-1d14-45ed-b8e5-4bac712d31b7" containerName="registry-server" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.494968 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.496759 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bx4vr" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.505180 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.630250 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\") pod \"mariadb-copy-data\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") " pod="openstack/mariadb-copy-data" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.630349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq4dg\" (UniqueName: \"kubernetes.io/projected/b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5-kube-api-access-fq4dg\") pod \"mariadb-copy-data\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") " pod="openstack/mariadb-copy-data" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.732576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq4dg\" (UniqueName: \"kubernetes.io/projected/b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5-kube-api-access-fq4dg\") pod \"mariadb-copy-data\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") " pod="openstack/mariadb-copy-data" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.733152 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\") pod \"mariadb-copy-data\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") " pod="openstack/mariadb-copy-data" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.737461 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.737525 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\") pod \"mariadb-copy-data\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6c703b4a8bb56023b4df8b4d737ddcf7e8f25480ba54f44b7ec864e7679d431/globalmount\"" pod="openstack/mariadb-copy-data" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.777329 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq4dg\" (UniqueName: \"kubernetes.io/projected/b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5-kube-api-access-fq4dg\") pod \"mariadb-copy-data\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") " pod="openstack/mariadb-copy-data" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.797870 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\") pod \"mariadb-copy-data\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") " pod="openstack/mariadb-copy-data" Oct 13 07:57:32 crc kubenswrapper[4833]: I1013 07:57:32.822396 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Oct 13 07:57:33 crc kubenswrapper[4833]: I1013 07:57:33.167869 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Oct 13 07:57:33 crc kubenswrapper[4833]: I1013 07:57:33.207391 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5","Type":"ContainerStarted","Data":"15c06747e4cb1e7e82e07fae31f19f664be33e6ea2fd261e1705177cd1dc97de"} Oct 13 07:57:34 crc kubenswrapper[4833]: I1013 07:57:34.218922 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5","Type":"ContainerStarted","Data":"3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1"} Oct 13 07:57:34 crc kubenswrapper[4833]: I1013 07:57:34.250567 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.25051975 podStartE2EDuration="3.25051975s" podCreationTimestamp="2025-10-13 07:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:57:34.241714369 +0000 UTC m=+5344.342137365" watchObservedRunningTime="2025-10-13 07:57:34.25051975 +0000 UTC m=+5344.350942696" Oct 13 07:57:36 crc kubenswrapper[4833]: I1013 07:57:36.258490 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Oct 13 07:57:36 crc kubenswrapper[4833]: I1013 07:57:36.259767 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 13 07:57:36 crc kubenswrapper[4833]: I1013 07:57:36.268967 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 13 07:57:36 crc kubenswrapper[4833]: I1013 07:57:36.401335 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpgld\" (UniqueName: \"kubernetes.io/projected/6fb4784e-5c29-4bcd-9e95-e3c7126dceda-kube-api-access-xpgld\") pod \"mariadb-client\" (UID: \"6fb4784e-5c29-4bcd-9e95-e3c7126dceda\") " pod="openstack/mariadb-client" Oct 13 07:57:36 crc kubenswrapper[4833]: I1013 07:57:36.503387 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpgld\" (UniqueName: \"kubernetes.io/projected/6fb4784e-5c29-4bcd-9e95-e3c7126dceda-kube-api-access-xpgld\") pod \"mariadb-client\" (UID: \"6fb4784e-5c29-4bcd-9e95-e3c7126dceda\") " pod="openstack/mariadb-client" Oct 13 07:57:36 crc kubenswrapper[4833]: I1013 07:57:36.536088 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpgld\" (UniqueName: \"kubernetes.io/projected/6fb4784e-5c29-4bcd-9e95-e3c7126dceda-kube-api-access-xpgld\") pod \"mariadb-client\" (UID: \"6fb4784e-5c29-4bcd-9e95-e3c7126dceda\") " pod="openstack/mariadb-client" Oct 13 07:57:36 crc kubenswrapper[4833]: I1013 07:57:36.612633 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 13 07:57:37 crc kubenswrapper[4833]: I1013 07:57:37.044334 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 13 07:57:37 crc kubenswrapper[4833]: W1013 07:57:37.060183 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb4784e_5c29_4bcd_9e95_e3c7126dceda.slice/crio-6372fd0ebb5886d86ea894c3a6b85a8bd614dd1d237af6b8554a901970b12c21 WatchSource:0}: Error finding container 6372fd0ebb5886d86ea894c3a6b85a8bd614dd1d237af6b8554a901970b12c21: Status 404 returned error can't find the container with id 6372fd0ebb5886d86ea894c3a6b85a8bd614dd1d237af6b8554a901970b12c21 Oct 13 07:57:37 crc kubenswrapper[4833]: I1013 07:57:37.257427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"6fb4784e-5c29-4bcd-9e95-e3c7126dceda","Type":"ContainerStarted","Data":"6372fd0ebb5886d86ea894c3a6b85a8bd614dd1d237af6b8554a901970b12c21"} Oct 13 07:57:38 crc kubenswrapper[4833]: I1013 07:57:38.270469 4833 generic.go:334] "Generic (PLEG): container finished" podID="6fb4784e-5c29-4bcd-9e95-e3c7126dceda" containerID="6bf45e9adc7c6b9491ec622a592655cde72c18832e72e2bb5e45779df1800aeb" exitCode=0 Oct 13 07:57:38 crc kubenswrapper[4833]: I1013 07:57:38.270568 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"6fb4784e-5c29-4bcd-9e95-e3c7126dceda","Type":"ContainerDied","Data":"6bf45e9adc7c6b9491ec622a592655cde72c18832e72e2bb5e45779df1800aeb"} Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.560631 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client"
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.589059 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_6fb4784e-5c29-4bcd-9e95-e3c7126dceda/mariadb-client/0.log"
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.615793 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.620849 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.656681 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpgld\" (UniqueName: \"kubernetes.io/projected/6fb4784e-5c29-4bcd-9e95-e3c7126dceda-kube-api-access-xpgld\") pod \"6fb4784e-5c29-4bcd-9e95-e3c7126dceda\" (UID: \"6fb4784e-5c29-4bcd-9e95-e3c7126dceda\") "
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.661906 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb4784e-5c29-4bcd-9e95-e3c7126dceda-kube-api-access-xpgld" (OuterVolumeSpecName: "kube-api-access-xpgld") pod "6fb4784e-5c29-4bcd-9e95-e3c7126dceda" (UID: "6fb4784e-5c29-4bcd-9e95-e3c7126dceda"). InnerVolumeSpecName "kube-api-access-xpgld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.755439 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Oct 13 07:57:39 crc kubenswrapper[4833]: E1013 07:57:39.755751 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb4784e-5c29-4bcd-9e95-e3c7126dceda" containerName="mariadb-client"
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.755768 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb4784e-5c29-4bcd-9e95-e3c7126dceda" containerName="mariadb-client"
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.755948 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb4784e-5c29-4bcd-9e95-e3c7126dceda" containerName="mariadb-client"
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.756418 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.759657 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpgld\" (UniqueName: \"kubernetes.io/projected/6fb4784e-5c29-4bcd-9e95-e3c7126dceda-kube-api-access-xpgld\") on node \"crc\" DevicePath \"\""
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.763966 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.861326 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59nf\" (UniqueName: \"kubernetes.io/projected/16fc33e5-3376-4cca-ac83-ef07fd97760a-kube-api-access-f59nf\") pod \"mariadb-client\" (UID: \"16fc33e5-3376-4cca-ac83-ef07fd97760a\") " pod="openstack/mariadb-client"
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.962661 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f59nf\" (UniqueName: \"kubernetes.io/projected/16fc33e5-3376-4cca-ac83-ef07fd97760a-kube-api-access-f59nf\") pod \"mariadb-client\" (UID: \"16fc33e5-3376-4cca-ac83-ef07fd97760a\") " pod="openstack/mariadb-client"
Oct 13 07:57:39 crc kubenswrapper[4833]: I1013 07:57:39.991173 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f59nf\" (UniqueName: \"kubernetes.io/projected/16fc33e5-3376-4cca-ac83-ef07fd97760a-kube-api-access-f59nf\") pod \"mariadb-client\" (UID: \"16fc33e5-3376-4cca-ac83-ef07fd97760a\") " pod="openstack/mariadb-client"
Oct 13 07:57:40 crc kubenswrapper[4833]: I1013 07:57:40.082167 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Oct 13 07:57:40 crc kubenswrapper[4833]: I1013 07:57:40.292962 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6372fd0ebb5886d86ea894c3a6b85a8bd614dd1d237af6b8554a901970b12c21"
Oct 13 07:57:40 crc kubenswrapper[4833]: I1013 07:57:40.293520 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Oct 13 07:57:40 crc kubenswrapper[4833]: I1013 07:57:40.318872 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="6fb4784e-5c29-4bcd-9e95-e3c7126dceda" podUID="16fc33e5-3376-4cca-ac83-ef07fd97760a"
Oct 13 07:57:40 crc kubenswrapper[4833]: I1013 07:57:40.360248 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Oct 13 07:57:40 crc kubenswrapper[4833]: W1013 07:57:40.368990 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16fc33e5_3376_4cca_ac83_ef07fd97760a.slice/crio-2fb91b45fe0c865f7c4bb97d181f1a4be0d7b4e608b71e02a8c78b4cb6b305ee WatchSource:0}: Error finding container 2fb91b45fe0c865f7c4bb97d181f1a4be0d7b4e608b71e02a8c78b4cb6b305ee: Status 404 returned error can't find the container with id 2fb91b45fe0c865f7c4bb97d181f1a4be0d7b4e608b71e02a8c78b4cb6b305ee
Oct 13 07:57:40 crc kubenswrapper[4833]: I1013 07:57:40.642172 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb4784e-5c29-4bcd-9e95-e3c7126dceda" path="/var/lib/kubelet/pods/6fb4784e-5c29-4bcd-9e95-e3c7126dceda/volumes"
Oct 13 07:57:41 crc kubenswrapper[4833]: I1013 07:57:41.304924 4833 generic.go:334] "Generic (PLEG): container finished" podID="16fc33e5-3376-4cca-ac83-ef07fd97760a" containerID="054af7fab471f7272ab3504b942875052e0832e0e624e58ca28c67052d1667b4" exitCode=0
Oct 13 07:57:41 crc kubenswrapper[4833]: I1013 07:57:41.305007 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"16fc33e5-3376-4cca-ac83-ef07fd97760a","Type":"ContainerDied","Data":"054af7fab471f7272ab3504b942875052e0832e0e624e58ca28c67052d1667b4"}
Oct 13 07:57:41 crc kubenswrapper[4833]: I1013 07:57:41.305415 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"16fc33e5-3376-4cca-ac83-ef07fd97760a","Type":"ContainerStarted","Data":"2fb91b45fe0c865f7c4bb97d181f1a4be0d7b4e608b71e02a8c78b4cb6b305ee"}
Oct 13 07:57:42 crc kubenswrapper[4833]: I1013 07:57:42.673092 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Oct 13 07:57:42 crc kubenswrapper[4833]: I1013 07:57:42.699029 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_16fc33e5-3376-4cca-ac83-ef07fd97760a/mariadb-client/0.log"
Oct 13 07:57:42 crc kubenswrapper[4833]: I1013 07:57:42.730979 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Oct 13 07:57:42 crc kubenswrapper[4833]: I1013 07:57:42.739167 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Oct 13 07:57:42 crc kubenswrapper[4833]: I1013 07:57:42.811111 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f59nf\" (UniqueName: \"kubernetes.io/projected/16fc33e5-3376-4cca-ac83-ef07fd97760a-kube-api-access-f59nf\") pod \"16fc33e5-3376-4cca-ac83-ef07fd97760a\" (UID: \"16fc33e5-3376-4cca-ac83-ef07fd97760a\") "
Oct 13 07:57:42 crc kubenswrapper[4833]: I1013 07:57:42.819059 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16fc33e5-3376-4cca-ac83-ef07fd97760a-kube-api-access-f59nf" (OuterVolumeSpecName: "kube-api-access-f59nf") pod "16fc33e5-3376-4cca-ac83-ef07fd97760a" (UID: "16fc33e5-3376-4cca-ac83-ef07fd97760a"). InnerVolumeSpecName "kube-api-access-f59nf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:57:42 crc kubenswrapper[4833]: I1013 07:57:42.913050 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f59nf\" (UniqueName: \"kubernetes.io/projected/16fc33e5-3376-4cca-ac83-ef07fd97760a-kube-api-access-f59nf\") on node \"crc\" DevicePath \"\""
Oct 13 07:57:43 crc kubenswrapper[4833]: I1013 07:57:43.336961 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb91b45fe0c865f7c4bb97d181f1a4be0d7b4e608b71e02a8c78b4cb6b305ee"
Oct 13 07:57:43 crc kubenswrapper[4833]: I1013 07:57:43.337018 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Oct 13 07:57:44 crc kubenswrapper[4833]: I1013 07:57:44.639922 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fc33e5-3376-4cca-ac83-ef07fd97760a" path="/var/lib/kubelet/pods/16fc33e5-3376-4cca-ac83-ef07fd97760a/volumes"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.184983 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 13 07:58:16 crc kubenswrapper[4833]: E1013 07:58:16.185806 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fc33e5-3376-4cca-ac83-ef07fd97760a" containerName="mariadb-client"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.185818 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fc33e5-3376-4cca-ac83-ef07fd97760a" containerName="mariadb-client"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.185958 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fc33e5-3376-4cca-ac83-ef07fd97760a" containerName="mariadb-client"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.186809 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.189356 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.189938 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.190039 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.190504 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.190798 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7qzgw"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.214995 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.237352 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.239899 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.247167 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.248919 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.263248 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.288651 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.312460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480246d-bbb2-4278-850e-1835e52c7eae-config\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.312609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45972ffb-795d-412f-a87e-786298891e83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45972ffb-795d-412f-a87e-786298891e83\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.312819 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0480246d-bbb2-4278-850e-1835e52c7eae-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313466 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313566 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkwtc\" (UniqueName: \"kubernetes.io/projected/1bf99176-41b5-4884-ab0e-4a2f877e26ec-kube-api-access-vkwtc\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313612 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bf99176-41b5-4884-ab0e-4a2f877e26ec-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf99176-41b5-4884-ab0e-4a2f877e26ec-config\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313693 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fc5edcfa-8608-4a7e-aed3-2716a050e8ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc5edcfa-8608-4a7e-aed3-2716a050e8ee\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313732 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5npm\" (UniqueName: \"kubernetes.io/projected/0480246d-bbb2-4278-850e-1835e52c7eae-kube-api-access-c5npm\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313753 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313804 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0480246d-bbb2-4278-850e-1835e52c7eae-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.313892 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.314070 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.314135 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.314197 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1bf99176-41b5-4884-ab0e-4a2f877e26ec-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.314269 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415465 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415517 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415554 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480246d-bbb2-4278-850e-1835e52c7eae-config\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415584 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45972ffb-795d-412f-a87e-786298891e83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45972ffb-795d-412f-a87e-786298891e83\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415609 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0480246d-bbb2-4278-850e-1835e52c7eae-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415631 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkwtc\" (UniqueName: \"kubernetes.io/projected/1bf99176-41b5-4884-ab0e-4a2f877e26ec-kube-api-access-vkwtc\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415668 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bf99176-41b5-4884-ab0e-4a2f877e26ec-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf99176-41b5-4884-ab0e-4a2f877e26ec-config\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415720 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415745 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml4pz\" (UniqueName: \"kubernetes.io/projected/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-kube-api-access-ml4pz\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415773 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fc5edcfa-8608-4a7e-aed3-2716a050e8ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc5edcfa-8608-4a7e-aed3-2716a050e8ee\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5npm\" (UniqueName: \"kubernetes.io/projected/0480246d-bbb2-4278-850e-1835e52c7eae-kube-api-access-c5npm\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415873 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415903 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0480246d-bbb2-4278-850e-1835e52c7eae-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415927 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-config\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.415999 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.416017 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.416035 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8c8368d6-a250-42a5-a2c3-3cda51d7c88a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c8368d6-a250-42a5-a2c3-3cda51d7c88a\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.416059 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1bf99176-41b5-4884-ab0e-4a2f877e26ec-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.416450 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0480246d-bbb2-4278-850e-1835e52c7eae-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.416615 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1bf99176-41b5-4884-ab0e-4a2f877e26ec-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.416842 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf99176-41b5-4884-ab0e-4a2f877e26ec-config\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.417255 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480246d-bbb2-4278-850e-1835e52c7eae-config\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.417606 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0480246d-bbb2-4278-850e-1835e52c7eae-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.417775 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bf99176-41b5-4884-ab0e-4a2f877e26ec-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.425636 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.427068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.427246 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.427281 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fc5edcfa-8608-4a7e-aed3-2716a050e8ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc5edcfa-8608-4a7e-aed3-2716a050e8ee\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8dba53b3cbac9fd98f3044615d2f733178221cddf443ab0e0f146a74248afba1/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.427344 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.427396 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45972ffb-795d-412f-a87e-786298891e83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45972ffb-795d-412f-a87e-786298891e83\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b4364f605ea557f13d3e0242017ab04cd4538fff77fa345e7821ed0f11f08e2/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.429054 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.430776 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.431984 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bf99176-41b5-4884-ab0e-4a2f877e26ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.440066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0480246d-bbb2-4278-850e-1835e52c7eae-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.440987 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5npm\" (UniqueName: \"kubernetes.io/projected/0480246d-bbb2-4278-850e-1835e52c7eae-kube-api-access-c5npm\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.441394 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkwtc\" (UniqueName: \"kubernetes.io/projected/1bf99176-41b5-4884-ab0e-4a2f877e26ec-kube-api-access-vkwtc\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.464064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45972ffb-795d-412f-a87e-786298891e83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45972ffb-795d-412f-a87e-786298891e83\") pod \"ovsdbserver-sb-2\" (UID: \"1bf99176-41b5-4884-ab0e-4a2f877e26ec\") " pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.473085 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fc5edcfa-8608-4a7e-aed3-2716a050e8ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc5edcfa-8608-4a7e-aed3-2716a050e8ee\") pod \"ovsdbserver-sb-0\" (UID: \"0480246d-bbb2-4278-850e-1835e52c7eae\") " pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.506753 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.517878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml4pz\" (UniqueName: \"kubernetes.io/projected/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-kube-api-access-ml4pz\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.517934 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.517977 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.518026 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.520835 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.520932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-config\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.520995 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8c8368d6-a250-42a5-a2c3-3cda51d7c88a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c8368d6-a250-42a5-a2c3-3cda51d7c88a\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.521060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.523134 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.524124 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.528823 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.530018 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-config\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.530093 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.530137 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8c8368d6-a250-42a5-a2c3-3cda51d7c88a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c8368d6-a250-42a5-a2c3-3cda51d7c88a\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9f38d92dd1a5f164e37777a8a8a2f14617c77e00b616ffd82f75229984d45dd3/globalmount\"" pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.530677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.534196 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.542404 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml4pz\" (UniqueName: \"kubernetes.io/projected/9413068d-6f1c-449a-bbbd-7e1ad94bf92c-kube-api-access-ml4pz\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.568122 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.577721 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8c8368d6-a250-42a5-a2c3-3cda51d7c88a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c8368d6-a250-42a5-a2c3-3cda51d7c88a\") pod \"ovsdbserver-sb-1\" (UID: \"9413068d-6f1c-449a-bbbd-7e1ad94bf92c\") " pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.586885 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:16 crc kubenswrapper[4833]: I1013 07:58:16.948188 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.098911 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.223923 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Oct 13 07:58:17 crc kubenswrapper[4833]: W1013 07:58:17.241211 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9413068d_6f1c_449a_bbbd_7e1ad94bf92c.slice/crio-0ce581a263914df7d64c50f27ceb22043b464923d0371ee0b7895f1d53743147 WatchSource:0}: Error finding container 0ce581a263914df7d64c50f27ceb22043b464923d0371ee0b7895f1d53743147: Status 404 returned error can't find the container with id 0ce581a263914df7d64c50f27ceb22043b464923d0371ee0b7895f1d53743147
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.682345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0480246d-bbb2-4278-850e-1835e52c7eae","Type":"ContainerStarted","Data":"535c920eae27f73020273e39049f6d723a5d95c76e35b88a62ccae991a1c95d3"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.682696 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0480246d-bbb2-4278-850e-1835e52c7eae","Type":"ContainerStarted","Data":"8ba2d2f7341b67b2eecf21c5561fb963229e547b8733acc2e61ee03e0c86b584"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.682708 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0480246d-bbb2-4278-850e-1835e52c7eae","Type":"ContainerStarted","Data":"d220e0ef6d257c9c727bab8d45adf33ffd1dccf89a87af4ed22ca6a83e725161"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.684204 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"1bf99176-41b5-4884-ab0e-4a2f877e26ec","Type":"ContainerStarted","Data":"8cbcd066fa74ee4616620ef36702c50526e030fb15cc892f97642999ccfb9dd6"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.684238 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"1bf99176-41b5-4884-ab0e-4a2f877e26ec","Type":"ContainerStarted","Data":"8b2660c0585ced76423708b0bc621535af05efdb898b6da663d347ae136d0a3f"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.684249 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"1bf99176-41b5-4884-ab0e-4a2f877e26ec","Type":"ContainerStarted","Data":"bf2c18d3925c863b5ca33cd4b518fd9cc57537cf276ac6da2871f3992c25da0e"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.687009 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"9413068d-6f1c-449a-bbbd-7e1ad94bf92c","Type":"ContainerStarted","Data":"26acb0ef8d465f3d3aef1167d9a4bba44ac4e6dd7b64c9d88c4033a0e2b64598"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.687032 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"9413068d-6f1c-449a-bbbd-7e1ad94bf92c","Type":"ContainerStarted","Data":"61c4454db7cba472200b60e6b89b294f8797c2e49efab7a1972293c406f7b14f"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.687041 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"9413068d-6f1c-449a-bbbd-7e1ad94bf92c","Type":"ContainerStarted","Data":"0ce581a263914df7d64c50f27ceb22043b464923d0371ee0b7895f1d53743147"}
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.703256 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.703239825 podStartE2EDuration="2.703239825s" podCreationTimestamp="2025-10-13 07:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:17.700975701 +0000 UTC m=+5387.801398607" watchObservedRunningTime="2025-10-13 07:58:17.703239825 +0000 UTC m=+5387.803662731"
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.718933 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=2.7189157010000002 podStartE2EDuration="2.718915701s" podCreationTimestamp="2025-10-13 07:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:17.716632316 +0000 UTC m=+5387.817055232" watchObservedRunningTime="2025-10-13 07:58:17.718915701 +0000 UTC m=+5387.819338617"
Oct 13 07:58:17 crc kubenswrapper[4833]: I1013 07:58:17.741954 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=2.741936355 podStartE2EDuration="2.741936355s" podCreationTimestamp="2025-10-13 07:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:17.739398443 +0000 UTC m=+5387.839821359" watchObservedRunningTime="2025-10-13 07:58:17.741936355 +0000 UTC m=+5387.842359271"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.357514 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.359815 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.366712 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.366974 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.367305 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-d9jjd"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.367837 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.379895 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.381565 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.399006 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.416854 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.419249 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.426728 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.446744 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.452796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tszq\" (UniqueName: \"kubernetes.io/projected/68045148-ee4a-4502-932b-d945d4cc26f3-kube-api-access-2tszq\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.452858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f846dac2-caa3-435f-9413-8b6ae3fa7527\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f846dac2-caa3-435f-9413-8b6ae3fa7527\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.452893 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvt2\" (UniqueName: \"kubernetes.io/projected/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-kube-api-access-phvt2\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.452938 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzmps\" (UniqueName: \"kubernetes.io/projected/98c76e26-6ad7-49f3-b68c-1a1a857111dd-kube-api-access-dzmps\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.452975 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68045148-ee4a-4502-932b-d945d4cc26f3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.452999 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98c76e26-6ad7-49f3-b68c-1a1a857111dd-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453043 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453064 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453092 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68045148-ee4a-4502-932b-d945d4cc26f3-config\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453118 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453139 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98c76e26-6ad7-49f3-b68c-1a1a857111dd-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453170 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453197 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453223 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453273 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68045148-ee4a-4502-932b-d945d4cc26f3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453316 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453338 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-config\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453370 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9d1e79bb-b3de-4b88-b731-05cba45741db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d1e79bb-b3de-4b88-b731-05cba45741db\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453429 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8c11e49a-9013-491f-b22c-ebd9bb7be6b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c11e49a-9013-491f-b22c-ebd9bb7be6b1\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.453457 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c76e26-6ad7-49f3-b68c-1a1a857111dd-config\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555344 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tszq\" (UniqueName: \"kubernetes.io/projected/68045148-ee4a-4502-932b-d945d4cc26f3-kube-api-access-2tszq\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555423 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f846dac2-caa3-435f-9413-8b6ae3fa7527\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f846dac2-caa3-435f-9413-8b6ae3fa7527\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555470 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvt2\" (UniqueName: \"kubernetes.io/projected/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-kube-api-access-phvt2\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555512 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzmps\" (UniqueName: \"kubernetes.io/projected/98c76e26-6ad7-49f3-b68c-1a1a857111dd-kube-api-access-dzmps\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68045148-ee4a-4502-932b-d945d4cc26f3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555610 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555632 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555654 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98c76e26-6ad7-49f3-b68c-1a1a857111dd-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555675 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68045148-ee4a-4502-932b-d945d4cc26f3-config\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555739 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98c76e26-6ad7-49f3-b68c-1a1a857111dd-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555760 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555786 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555814 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555838 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555862 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555890 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68045148-ee4a-4502-932b-d945d4cc26f3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555916 4833 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555937 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555960 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-config\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.555994 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9d1e79bb-b3de-4b88-b731-05cba45741db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d1e79bb-b3de-4b88-b731-05cba45741db\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.556019 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.556046 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8c11e49a-9013-491f-b22c-ebd9bb7be6b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c11e49a-9013-491f-b22c-ebd9bb7be6b1\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.556077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c76e26-6ad7-49f3-b68c-1a1a857111dd-config\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.557031 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.558225 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c76e26-6ad7-49f3-b68c-1a1a857111dd-config\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.559424 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98c76e26-6ad7-49f3-b68c-1a1a857111dd-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 
crc kubenswrapper[4833]: I1013 07:58:18.559463 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68045148-ee4a-4502-932b-d945d4cc26f3-config\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.560437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-config\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.561678 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98c76e26-6ad7-49f3-b68c-1a1a857111dd-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.561829 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.561975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.562033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68045148-ee4a-4502-932b-d945d4cc26f3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.562750 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68045148-ee4a-4502-932b-d945d4cc26f3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.563224 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.563304 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
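Annotation: the block above is the kubelet volume reconciler running its usual three-phase sequence for each ovsdbserver-nb replica: VerifyControllerAttachedVolume (reconciler_common.go:245) confirms the volume is attached and safe to mount, MountVolume (reconciler_common.go:218) queues the mount, and operation_generator.go:637 reports MountVolume.SetUp succeeded. When auditing a burst like this it helps to collapse it to one line per (pod, volume) pair; below is a throwaway sketch that does that from the raw journal text on stdin (all names in it are mine, not kubelet's).

```go
// volphase.go — a throwaway sketch (not kubelet code) that reads journal
// lines like the ones above on stdin and keeps the latest reconciler phase
// seen for each (pod, volume) pair:
// VerifyControllerAttachedVolume -> MountVolume -> MountVolume.SetUp succeeded.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	phaseRe = regexp.MustCompile(`VerifyControllerAttachedVolume started|MountVolume started|MountVolume\.SetUp succeeded`)
	// The message text is escaped in the journal (\"...\"), so the volume
	// pattern tolerates an optional backslash before each quote.
	volumeRe = regexp.MustCompile(`for volume \\?"([^"\\]+)\\?"`)
	podRe    = regexp.MustCompile(`pod="([^"]+)"`)
)

func main() {
	latest := map[string]string{} // "pod volume" -> last phase observed
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		phase := phaseRe.FindString(line)
		vol := volumeRe.FindStringSubmatch(line)
		pod := podRe.FindStringSubmatch(line)
		if phase == "" || vol == nil || pod == nil {
			continue
		}
		latest[pod[1]+" "+vol[1]] = phase
	}
	for k, v := range latest { // map order is random; sort if stable output matters
		fmt.Printf("%-75s %s\n", k, v)
	}
}
```

Run over this section, every config, scripts, secret, empty-dir, projected-token, and PVC volume of ovsdbserver-nb-{0,1,2} appears to reach MountVolume.SetUp succeeded.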
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.563330 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9d1e79bb-b3de-4b88-b731-05cba45741db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d1e79bb-b3de-4b88-b731-05cba45741db\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1e017124b545e9abe5cf5d1179048f7a60458d5780512c9549c00b7aa8261c29/globalmount\"" pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.563531 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.563611 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f846dac2-caa3-435f-9413-8b6ae3fa7527\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f846dac2-caa3-435f-9413-8b6ae3fa7527\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1c87525ae058a3442fa1c6a680cefadb99542da64244581765cbbf0dd002107/globalmount\"" pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.564192 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.564386 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
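Annotation: the interleaved csi_attacher.go:380 lines are kubelet asking the CSI driver whether it wants a node-level staging step. kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so NodeStageVolume is skipped and MountDevice is reported as trivially succeeded with the precomputed globalmount path. A minimal sketch of that gate follows (illustrative types and names of my own, not kubelet's actual csi_attacher.go):

```go
// Minimal sketch of the capability gate behind the "STAGE_UNSTAGE_VOLUME
// capability not set. Skipping MountDevice..." lines above: staging only
// happens when the CSI driver's node capabilities include STAGE_UNSTAGE_VOLUME.
package main

import "fmt"

type nodeCapability string

const stageUnstageVolume nodeCapability = "STAGE_UNSTAGE_VOLUME"

type csiDriver struct {
	name         string
	capabilities map[nodeCapability]bool
}

// mountDevice mirrors the observable behaviour in the log: drivers without
// the staging capability skip NodeStageVolume and succeed immediately.
func mountDevice(d csiDriver, volumeHandle, globalMountPath string) error {
	if !d.capabilities[stageUnstageVolume] {
		fmt.Printf("%s: STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...\n", d.name)
		return nil // nothing to stage; SetUp bind-mounts into the pod dir later
	}
	// A real kubelet would issue a NodeStageVolume gRPC call here so the
	// driver can mount the device once at globalMountPath for all pods.
	fmt.Printf("%s: staging %s at %s\n", d.name, volumeHandle, globalMountPath)
	return nil
}

func main() {
	hostpath := csiDriver{name: "kubevirt.io.hostpath-provisioner", capabilities: map[nodeCapability]bool{}}
	_ = mountDevice(hostpath, "pvc-9d1e79bb-b3de-4b88-b731-05cba45741db", "/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/.../globalmount")
}
```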
Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.564430 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8c11e49a-9013-491f-b22c-ebd9bb7be6b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c11e49a-9013-491f-b22c-ebd9bb7be6b1\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/52048eae520f98461e18381196b64b7e448f85863af947306afc81948ed6e499/globalmount\"" pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.564858 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.564917 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.565187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.565224 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68045148-ee4a-4502-932b-d945d4cc26f3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.565832 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98c76e26-6ad7-49f3-b68c-1a1a857111dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.583864 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.588348 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzmps\" (UniqueName: \"kubernetes.io/projected/98c76e26-6ad7-49f3-b68c-1a1a857111dd-kube-api-access-dzmps\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.595747 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tszq\" (UniqueName: \"kubernetes.io/projected/68045148-ee4a-4502-932b-d945d4cc26f3-kube-api-access-2tszq\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc 
kubenswrapper[4833]: I1013 07:58:18.608167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvt2\" (UniqueName: \"kubernetes.io/projected/38a8efaf-eda4-4b3a-9d9c-7d1f513b8345-kube-api-access-phvt2\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.625892 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8c11e49a-9013-491f-b22c-ebd9bb7be6b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c11e49a-9013-491f-b22c-ebd9bb7be6b1\") pod \"ovsdbserver-nb-0\" (UID: \"68045148-ee4a-4502-932b-d945d4cc26f3\") " pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.631565 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9d1e79bb-b3de-4b88-b731-05cba45741db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d1e79bb-b3de-4b88-b731-05cba45741db\") pod \"ovsdbserver-nb-2\" (UID: \"98c76e26-6ad7-49f3-b68c-1a1a857111dd\") " pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.634663 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f846dac2-caa3-435f-9413-8b6ae3fa7527\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f846dac2-caa3-435f-9413-8b6ae3fa7527\") pod \"ovsdbserver-nb-1\" (UID: \"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345\") " pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.702321 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.723139 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:18 crc kubenswrapper[4833]: I1013 07:58:18.741177 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.263887 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 07:58:19 crc kubenswrapper[4833]: W1013 07:58:19.274350 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68045148_ee4a_4502_932b_d945d4cc26f3.slice/crio-b92ba6c62db72a6d2ad80b085aed52596062a9ca81268ad596f04c4d15517826 WatchSource:0}: Error finding container b92ba6c62db72a6d2ad80b085aed52596062a9ca81268ad596f04c4d15517826: Status 404 returned error can't find the container with id b92ba6c62db72a6d2ad80b085aed52596062a9ca81268ad596f04c4d15517826 Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.340851 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Oct 13 07:58:19 crc kubenswrapper[4833]: W1013 07:58:19.343323 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a8efaf_eda4_4b3a_9d9c_7d1f513b8345.slice/crio-54be3f426cdf896a19bbb1ccd5251cbf959170c7c2b2187133126c8275fc82f9 WatchSource:0}: Error finding container 54be3f426cdf896a19bbb1ccd5251cbf959170c7c2b2187133126c8275fc82f9: Status 404 returned error can't find the container with id 54be3f426cdf896a19bbb1ccd5251cbf959170c7c2b2187133126c8275fc82f9 Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.507609 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.569987 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.587674 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.704569 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68045148-ee4a-4502-932b-d945d4cc26f3","Type":"ContainerStarted","Data":"59b20ad63ab3f03820815f135cf9628ab02c5db15beb57d0c061da9c2672e20b"} Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.704928 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68045148-ee4a-4502-932b-d945d4cc26f3","Type":"ContainerStarted","Data":"81c4ab2f23da0bae6c39d545c9b5fe2048384fc2cd7bbc5584c389cb54b78fe0"} Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.704948 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68045148-ee4a-4502-932b-d945d4cc26f3","Type":"ContainerStarted","Data":"b92ba6c62db72a6d2ad80b085aed52596062a9ca81268ad596f04c4d15517826"} Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.706034 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345","Type":"ContainerStarted","Data":"3368fa7e3f108c9985fddbc64af2f27f2d2be60f527a96e736107ce523a3ab9c"} Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.706084 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345","Type":"ContainerStarted","Data":"54be3f426cdf896a19bbb1ccd5251cbf959170c7c2b2187133126c8275fc82f9"} Oct 13 07:58:19 crc kubenswrapper[4833]: I1013 07:58:19.727618 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.727528913 podStartE2EDuration="2.727528913s" podCreationTimestamp="2025-10-13 07:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:19.721765079 +0000 UTC m=+5389.822187995" watchObservedRunningTime="2025-10-13 07:58:19.727528913 +0000 UTC m=+5389.827951829"
Oct 13 07:58:20 crc kubenswrapper[4833]: I1013 07:58:20.388977 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Oct 13 07:58:20 crc kubenswrapper[4833]: I1013 07:58:20.719382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"98c76e26-6ad7-49f3-b68c-1a1a857111dd","Type":"ContainerStarted","Data":"fe55a32940cfbc11706010c46b401b9c9a871e1ad90ea5c8fdd12a08cb2bd01e"}
Oct 13 07:58:20 crc kubenswrapper[4833]: I1013 07:58:20.719455 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"98c76e26-6ad7-49f3-b68c-1a1a857111dd","Type":"ContainerStarted","Data":"97b57c3cdb0269b3fabb61a970a8e2eb302f7f17c1b451fa098ec94627c33f4c"}
Oct 13 07:58:20 crc kubenswrapper[4833]: I1013 07:58:20.723291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"38a8efaf-eda4-4b3a-9d9c-7d1f513b8345","Type":"ContainerStarted","Data":"86011a346564cf2bcfac05a290ff40f8c66cfd000b298eed857fc976660859d5"}
Oct 13 07:58:20 crc kubenswrapper[4833]: I1013 07:58:20.763308 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.7632889130000002 podStartE2EDuration="3.763288913s" podCreationTimestamp="2025-10-13 07:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:20.754507004 +0000 UTC m=+5390.854929920" watchObservedRunningTime="2025-10-13 07:58:20.763288913 +0000 UTC m=+5390.863711829"
Oct 13 07:58:21 crc kubenswrapper[4833]: I1013 07:58:21.507379 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:21 crc kubenswrapper[4833]: I1013 07:58:21.569451 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:21 crc kubenswrapper[4833]: I1013 07:58:21.587789 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:21 crc kubenswrapper[4833]: I1013 07:58:21.702497 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Oct 13 07:58:21 crc kubenswrapper[4833]: I1013 07:58:21.724840 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1"
Oct 13 07:58:21 crc kubenswrapper[4833]: I1013 07:58:21.745825 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"98c76e26-6ad7-49f3-b68c-1a1a857111dd","Type":"ContainerStarted","Data":"e3cf8bb4e0e132632ac8bccc8527587eeb7f9bd398f2c04445552457b1f5e30f"}
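Annotation: the pod_startup_latency_tracker.go:104 entries are worth decoding. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp; podStartSLOduration additionally excludes the image-pull window; and the sentinel "0001-01-01 00:00:00 +0000 UTC" in firstStartedPulling/lastFinishedPulling means no image pull was needed, which is why SLO and E2E durations coincide for the ovsdbserver-nb pods. A self-contained check, using the timestamps copied from the openstack/ovn-copy-data entry later in this log (where a real pull happens):

```go
// durations.go — sketch relating the numbers logged by the startup latency
// tracker; the only inputs are the timestamp strings quoted in the log.
package main

import (
	"fmt"
	"strings"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

// parse drops the " m=+..." monotonic-clock suffix Go appends to Time.String().
func parse(s string) time.Time {
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i]
	}
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the openstack/ovn-copy-data entry further down.
	created := parse("2025-10-13 07:58:27 +0000 UTC")
	firstPull := parse("2025-10-13 07:58:30.141325473 +0000 UTC m=+5400.241748429")
	lastPull := parse("2025-10-13 07:58:32.957291364 +0000 UTC m=+5403.057714310")
	watched := parse("2025-10-13 07:58:33.965343787 +0000 UTC m=+5404.065766713")

	e2e := watched.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration ~", e2e) // logged: 6.965343787s
	fmt.Println("podStartSLOduration ~", slo) // logged: 4.149377906
	fmt.Println("no-pull sentinel is zero?", parse("0001-01-01 00:00:00 +0000 UTC").IsZero())
}
```

The SLO figure reproduces only to about 1e-8 seconds because kubelet subtracts durations taken from its internal monotonic clock rather than the formatted wall times.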
Oct 13 07:58:21 crc kubenswrapper[4833]: I1013 07:58:21.787331 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.787283918 podStartE2EDuration="4.787283918s" podCreationTimestamp="2025-10-13 07:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:21.781389241 +0000 UTC m=+5391.881812217" watchObservedRunningTime="2025-10-13 07:58:21.787283918 +0000 UTC m=+5391.887706864"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.588017 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.642502 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.666209 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.674293 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.697671 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.738897 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.891234 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5578f45959-tvxc4"]
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.892822 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5578f45959-tvxc4"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.900738 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 13 07:58:22 crc kubenswrapper[4833]: I1013 07:58:22.911110 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5578f45959-tvxc4"]
Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.046084 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7smc\" (UniqueName: \"kubernetes.io/projected/628ba20d-9743-4b1c-861e-49f83ba44ff1-kube-api-access-x7smc\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4"
Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.046167 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-dns-svc\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4"
Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.046324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-config\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4"
Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.046468 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " 
pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.148192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7smc\" (UniqueName: \"kubernetes.io/projected/628ba20d-9743-4b1c-861e-49f83ba44ff1-kube-api-access-x7smc\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.148331 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-dns-svc\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.148413 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-config\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.148492 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.149801 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-dns-svc\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.150065 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-config\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.150184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.172402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7smc\" (UniqueName: \"kubernetes.io/projected/628ba20d-9743-4b1c-861e-49f83ba44ff1-kube-api-access-x7smc\") pod \"dnsmasq-dns-5578f45959-tvxc4\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.214202 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.703419 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.724047 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.741937 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.772705 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5578f45959-tvxc4"] Oct 13 07:58:23 crc kubenswrapper[4833]: I1013 07:58:23.793593 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" event={"ID":"628ba20d-9743-4b1c-861e-49f83ba44ff1","Type":"ContainerStarted","Data":"20ebafc174ae3c10cfe817848bd9edb83110c4e6f92f59e9eebfbbe21df7f7ef"} Oct 13 07:58:24 crc kubenswrapper[4833]: I1013 07:58:24.742423 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:24 crc kubenswrapper[4833]: I1013 07:58:24.760270 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:24 crc kubenswrapper[4833]: I1013 07:58:24.776799 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:24 crc kubenswrapper[4833]: I1013 07:58:24.822150 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:24 crc kubenswrapper[4833]: I1013 07:58:24.831129 4833 generic.go:334] "Generic (PLEG): container finished" podID="628ba20d-9743-4b1c-861e-49f83ba44ff1" containerID="d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214" exitCode=0 Oct 13 07:58:24 crc kubenswrapper[4833]: I1013 07:58:24.832493 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" event={"ID":"628ba20d-9743-4b1c-861e-49f83ba44ff1","Type":"ContainerDied","Data":"d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214"} Oct 13 07:58:24 crc kubenswrapper[4833]: I1013 07:58:24.866862 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 13 07:58:24 crc kubenswrapper[4833]: I1013 07:58:24.878519 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.067888 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5578f45959-tvxc4"] Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.095331 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8d8987b6c-pj8pt"] Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.103630 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.106293 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.113975 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8d8987b6c-pj8pt"] Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.188432 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-dns-svc\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.188499 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-nb\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.188590 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-config\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.188646 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-sb\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.188666 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftzw\" (UniqueName: \"kubernetes.io/projected/0a194576-3860-4b16-a7a8-cb7951a1e776-kube-api-access-lftzw\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.290669 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-sb\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.290726 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftzw\" (UniqueName: \"kubernetes.io/projected/0a194576-3860-4b16-a7a8-cb7951a1e776-kube-api-access-lftzw\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.290774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-dns-svc\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " 
pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.290826 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-nb\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.290915 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-config\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.291900 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-config\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.292497 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-dns-svc\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.292724 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-sb\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.294195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-nb\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.323033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftzw\" (UniqueName: \"kubernetes.io/projected/0a194576-3860-4b16-a7a8-cb7951a1e776-kube-api-access-lftzw\") pod \"dnsmasq-dns-8d8987b6c-pj8pt\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.422838 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.688667 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8d8987b6c-pj8pt"] Oct 13 07:58:25 crc kubenswrapper[4833]: W1013 07:58:25.696771 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a194576_3860_4b16_a7a8_cb7951a1e776.slice/crio-a28c2dd694b8e19c56bb84ba3f732b438a8d82c98bffcb8f93438a9f8f2065f5 WatchSource:0}: Error finding container a28c2dd694b8e19c56bb84ba3f732b438a8d82c98bffcb8f93438a9f8f2065f5: Status 404 returned error can't find the container with id a28c2dd694b8e19c56bb84ba3f732b438a8d82c98bffcb8f93438a9f8f2065f5 Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.842974 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" event={"ID":"0a194576-3860-4b16-a7a8-cb7951a1e776","Type":"ContainerStarted","Data":"a28c2dd694b8e19c56bb84ba3f732b438a8d82c98bffcb8f93438a9f8f2065f5"} Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.846059 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" event={"ID":"628ba20d-9743-4b1c-861e-49f83ba44ff1","Type":"ContainerStarted","Data":"59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85"} Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.846693 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" podUID="628ba20d-9743-4b1c-861e-49f83ba44ff1" containerName="dnsmasq-dns" containerID="cri-o://59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85" gracePeriod=10 Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.871687 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" podStartSLOduration=3.8716609220000002 podStartE2EDuration="3.871660922s" podCreationTimestamp="2025-10-13 07:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:25.862463521 +0000 UTC m=+5395.962886477" watchObservedRunningTime="2025-10-13 07:58:25.871660922 +0000 UTC m=+5395.972083878" Oct 13 07:58:25 crc kubenswrapper[4833]: I1013 07:58:25.923448 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.234490 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.411774 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7smc\" (UniqueName: \"kubernetes.io/projected/628ba20d-9743-4b1c-861e-49f83ba44ff1-kube-api-access-x7smc\") pod \"628ba20d-9743-4b1c-861e-49f83ba44ff1\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.412364 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-dns-svc\") pod \"628ba20d-9743-4b1c-861e-49f83ba44ff1\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.412441 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-ovsdbserver-sb\") pod \"628ba20d-9743-4b1c-861e-49f83ba44ff1\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.412557 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-config\") pod \"628ba20d-9743-4b1c-861e-49f83ba44ff1\" (UID: \"628ba20d-9743-4b1c-861e-49f83ba44ff1\") " Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.418887 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/628ba20d-9743-4b1c-861e-49f83ba44ff1-kube-api-access-x7smc" (OuterVolumeSpecName: "kube-api-access-x7smc") pod "628ba20d-9743-4b1c-861e-49f83ba44ff1" (UID: "628ba20d-9743-4b1c-861e-49f83ba44ff1"). InnerVolumeSpecName "kube-api-access-x7smc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.465559 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "628ba20d-9743-4b1c-861e-49f83ba44ff1" (UID: "628ba20d-9743-4b1c-861e-49f83ba44ff1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.470511 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "628ba20d-9743-4b1c-861e-49f83ba44ff1" (UID: "628ba20d-9743-4b1c-861e-49f83ba44ff1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.478093 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-config" (OuterVolumeSpecName: "config") pod "628ba20d-9743-4b1c-861e-49f83ba44ff1" (UID: "628ba20d-9743-4b1c-861e-49f83ba44ff1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.514628 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.514695 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.514725 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/628ba20d-9743-4b1c-861e-49f83ba44ff1-config\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.514750 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7smc\" (UniqueName: \"kubernetes.io/projected/628ba20d-9743-4b1c-861e-49f83ba44ff1-kube-api-access-x7smc\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.858615 4833 generic.go:334] "Generic (PLEG): container finished" podID="0a194576-3860-4b16-a7a8-cb7951a1e776" containerID="324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc" exitCode=0 Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.858689 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" event={"ID":"0a194576-3860-4b16-a7a8-cb7951a1e776","Type":"ContainerDied","Data":"324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc"} Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.863770 4833 generic.go:334] "Generic (PLEG): container finished" podID="628ba20d-9743-4b1c-861e-49f83ba44ff1" containerID="59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85" exitCode=0 Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.864685 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.864722 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" event={"ID":"628ba20d-9743-4b1c-861e-49f83ba44ff1","Type":"ContainerDied","Data":"59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85"} Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.864813 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5578f45959-tvxc4" event={"ID":"628ba20d-9743-4b1c-861e-49f83ba44ff1","Type":"ContainerDied","Data":"20ebafc174ae3c10cfe817848bd9edb83110c4e6f92f59e9eebfbbe21df7f7ef"} Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.864903 4833 scope.go:117] "RemoveContainer" containerID="59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.978939 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5578f45959-tvxc4"] Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.984885 4833 scope.go:117] "RemoveContainer" containerID="d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214" Oct 13 07:58:26 crc kubenswrapper[4833]: I1013 07:58:26.987217 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5578f45959-tvxc4"] Oct 13 07:58:27 crc kubenswrapper[4833]: I1013 07:58:27.007441 4833 scope.go:117] "RemoveContainer" containerID="59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85" Oct 13 07:58:27 crc kubenswrapper[4833]: E1013 07:58:27.008111 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85\": container with ID starting with 59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85 not found: ID does not exist" containerID="59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85" Oct 13 07:58:27 crc kubenswrapper[4833]: I1013 07:58:27.008154 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85"} err="failed to get container status \"59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85\": rpc error: code = NotFound desc = could not find container \"59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85\": container with ID starting with 59d8d694655b133e43592faf3b39663c500ad7b326c5b8e8553282eaaa71fd85 not found: ID does not exist" Oct 13 07:58:27 crc kubenswrapper[4833]: I1013 07:58:27.008183 4833 scope.go:117] "RemoveContainer" containerID="d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214" Oct 13 07:58:27 crc kubenswrapper[4833]: E1013 07:58:27.008453 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214\": container with ID starting with d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214 not found: ID does not exist" containerID="d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214" Oct 13 07:58:27 crc kubenswrapper[4833]: I1013 07:58:27.008512 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214"} err="failed to get container status 
\"d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214\": rpc error: code = NotFound desc = could not find container \"d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214\": container with ID starting with d3175e6cd3b3bf37304f4377bcbd2dd807437b3ee3fd03ab88192d14637b0214 not found: ID does not exist" Oct 13 07:58:27 crc kubenswrapper[4833]: I1013 07:58:27.875093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" event={"ID":"0a194576-3860-4b16-a7a8-cb7951a1e776","Type":"ContainerStarted","Data":"5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a"} Oct 13 07:58:27 crc kubenswrapper[4833]: I1013 07:58:27.875340 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:27 crc kubenswrapper[4833]: I1013 07:58:27.894830 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" podStartSLOduration=2.894811709 podStartE2EDuration="2.894811709s" podCreationTimestamp="2025-10-13 07:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:27.893475571 +0000 UTC m=+5397.993898497" watchObservedRunningTime="2025-10-13 07:58:27.894811709 +0000 UTC m=+5397.995234625" Oct 13 07:58:28 crc kubenswrapper[4833]: I1013 07:58:28.646966 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="628ba20d-9743-4b1c-861e-49f83ba44ff1" path="/var/lib/kubelet/pods/628ba20d-9743-4b1c-861e-49f83ba44ff1/volumes" Oct 13 07:58:28 crc kubenswrapper[4833]: I1013 07:58:28.872101 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Oct 13 07:58:28 crc kubenswrapper[4833]: E1013 07:58:28.872640 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628ba20d-9743-4b1c-861e-49f83ba44ff1" containerName="init" Oct 13 07:58:28 crc kubenswrapper[4833]: I1013 07:58:28.872669 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="628ba20d-9743-4b1c-861e-49f83ba44ff1" containerName="init" Oct 13 07:58:28 crc kubenswrapper[4833]: E1013 07:58:28.872711 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628ba20d-9743-4b1c-861e-49f83ba44ff1" containerName="dnsmasq-dns" Oct 13 07:58:28 crc kubenswrapper[4833]: I1013 07:58:28.872725 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="628ba20d-9743-4b1c-861e-49f83ba44ff1" containerName="dnsmasq-dns" Oct 13 07:58:28 crc kubenswrapper[4833]: I1013 07:58:28.872990 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="628ba20d-9743-4b1c-861e-49f83ba44ff1" containerName="dnsmasq-dns" Oct 13 07:58:28 crc kubenswrapper[4833]: I1013 07:58:28.873962 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Oct 13 07:58:28 crc kubenswrapper[4833]: I1013 07:58:28.877861 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Oct 13 07:58:28 crc kubenswrapper[4833]: I1013 07:58:28.883932 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.075584 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/dde840da-a72a-4913-a6ec-12e725edd967-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.075889 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm45f\" (UniqueName: \"kubernetes.io/projected/dde840da-a72a-4913-a6ec-12e725edd967-kube-api-access-mm45f\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.076015 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a70cf987-2495-46db-9d19-48453e28ffd3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.178823 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/dde840da-a72a-4913-a6ec-12e725edd967-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.178991 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm45f\" (UniqueName: \"kubernetes.io/projected/dde840da-a72a-4913-a6ec-12e725edd967-kube-api-access-mm45f\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.179040 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a70cf987-2495-46db-9d19-48453e28ffd3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.182914 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.183049 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a70cf987-2495-46db-9d19-48453e28ffd3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/edf927a3ebfc2d54c8f4c923e4825efdd3ba7dd70a1e617efa1ddaf742a8b4cd/globalmount\"" pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.186390 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/dde840da-a72a-4913-a6ec-12e725edd967-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.210753 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm45f\" (UniqueName: \"kubernetes.io/projected/dde840da-a72a-4913-a6ec-12e725edd967-kube-api-access-mm45f\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.224317 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a70cf987-2495-46db-9d19-48453e28ffd3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3\") pod \"ovn-copy-data\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " pod="openstack/ovn-copy-data" Oct 13 07:58:29 crc kubenswrapper[4833]: I1013 07:58:29.512522 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Oct 13 07:58:30 crc kubenswrapper[4833]: I1013 07:58:30.127978 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Oct 13 07:58:30 crc kubenswrapper[4833]: W1013 07:58:30.137830 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddde840da_a72a_4913_a6ec_12e725edd967.slice/crio-13e1381f1f883f78f12534ee73fed620f509c2ebe2464d124b37d37f98202a31 WatchSource:0}: Error finding container 13e1381f1f883f78f12534ee73fed620f509c2ebe2464d124b37d37f98202a31: Status 404 returned error can't find the container with id 13e1381f1f883f78f12534ee73fed620f509c2ebe2464d124b37d37f98202a31 Oct 13 07:58:30 crc kubenswrapper[4833]: I1013 07:58:30.542445 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:58:30 crc kubenswrapper[4833]: I1013 07:58:30.542568 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:58:30 crc kubenswrapper[4833]: I1013 07:58:30.915018 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"dde840da-a72a-4913-a6ec-12e725edd967","Type":"ContainerStarted","Data":"13e1381f1f883f78f12534ee73fed620f509c2ebe2464d124b37d37f98202a31"} Oct 13 07:58:33 crc kubenswrapper[4833]: I1013 07:58:33.942147 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"dde840da-a72a-4913-a6ec-12e725edd967","Type":"ContainerStarted","Data":"31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958"} Oct 13 07:58:33 crc kubenswrapper[4833]: I1013 07:58:33.965365 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=4.149377906 podStartE2EDuration="6.965343787s" podCreationTimestamp="2025-10-13 07:58:27 +0000 UTC" firstStartedPulling="2025-10-13 07:58:30.141325473 +0000 UTC m=+5400.241748429" lastFinishedPulling="2025-10-13 07:58:32.957291364 +0000 UTC m=+5403.057714310" observedRunningTime="2025-10-13 07:58:33.95772013 +0000 UTC m=+5404.058143086" watchObservedRunningTime="2025-10-13 07:58:33.965343787 +0000 UTC m=+5404.065766713" Oct 13 07:58:35 crc kubenswrapper[4833]: I1013 07:58:35.424771 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:58:35 crc kubenswrapper[4833]: I1013 07:58:35.520448 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-pct5n"] Oct 13 07:58:35 crc kubenswrapper[4833]: I1013 07:58:35.520665 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" podUID="75a4b45d-5ace-4527-87b2-083eb5ea3199" containerName="dnsmasq-dns" containerID="cri-o://e845784987db35c05573d9e70f8bd4c6e6c965b7de10110ef340bbe60f3f1764" gracePeriod=10 Oct 13 07:58:35 crc kubenswrapper[4833]: I1013 07:58:35.962528 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="75a4b45d-5ace-4527-87b2-083eb5ea3199" containerID="e845784987db35c05573d9e70f8bd4c6e6c965b7de10110ef340bbe60f3f1764" exitCode=0 Oct 13 07:58:35 crc kubenswrapper[4833]: I1013 07:58:35.962776 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" event={"ID":"75a4b45d-5ace-4527-87b2-083eb5ea3199","Type":"ContainerDied","Data":"e845784987db35c05573d9e70f8bd4c6e6c965b7de10110ef340bbe60f3f1764"} Oct 13 07:58:35 crc kubenswrapper[4833]: I1013 07:58:35.962800 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" event={"ID":"75a4b45d-5ace-4527-87b2-083eb5ea3199","Type":"ContainerDied","Data":"bd34a08503bc3a060e7d253ddffc83453896f39336fec18127d708bd26c39b23"} Oct 13 07:58:35 crc kubenswrapper[4833]: I1013 07:58:35.962810 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd34a08503bc3a060e7d253ddffc83453896f39336fec18127d708bd26c39b23" Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.005798 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.032483 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-dns-svc\") pod \"75a4b45d-5ace-4527-87b2-083eb5ea3199\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.032565 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-config\") pod \"75a4b45d-5ace-4527-87b2-083eb5ea3199\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.032677 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv7p2\" (UniqueName: \"kubernetes.io/projected/75a4b45d-5ace-4527-87b2-083eb5ea3199-kube-api-access-pv7p2\") pod \"75a4b45d-5ace-4527-87b2-083eb5ea3199\" (UID: \"75a4b45d-5ace-4527-87b2-083eb5ea3199\") " Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.051894 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a4b45d-5ace-4527-87b2-083eb5ea3199-kube-api-access-pv7p2" (OuterVolumeSpecName: "kube-api-access-pv7p2") pod "75a4b45d-5ace-4527-87b2-083eb5ea3199" (UID: "75a4b45d-5ace-4527-87b2-083eb5ea3199"). InnerVolumeSpecName "kube-api-access-pv7p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.105972 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-config" (OuterVolumeSpecName: "config") pod "75a4b45d-5ace-4527-87b2-083eb5ea3199" (UID: "75a4b45d-5ace-4527-87b2-083eb5ea3199"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.108439 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75a4b45d-5ace-4527-87b2-083eb5ea3199" (UID: "75a4b45d-5ace-4527-87b2-083eb5ea3199"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.135777 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.135801 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a4b45d-5ace-4527-87b2-083eb5ea3199-config\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.135811 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv7p2\" (UniqueName: \"kubernetes.io/projected/75a4b45d-5ace-4527-87b2-083eb5ea3199-kube-api-access-pv7p2\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:36 crc kubenswrapper[4833]: I1013 07:58:36.976577 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-pct5n" Oct 13 07:58:37 crc kubenswrapper[4833]: I1013 07:58:37.005986 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-pct5n"] Oct 13 07:58:37 crc kubenswrapper[4833]: I1013 07:58:37.010860 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-pct5n"] Oct 13 07:58:38 crc kubenswrapper[4833]: I1013 07:58:38.654209 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a4b45d-5ace-4527-87b2-083eb5ea3199" path="/var/lib/kubelet/pods/75a4b45d-5ace-4527-87b2-083eb5ea3199/volumes" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.580736 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 13 07:58:39 crc kubenswrapper[4833]: E1013 07:58:39.581364 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a4b45d-5ace-4527-87b2-083eb5ea3199" containerName="init" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.581386 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a4b45d-5ace-4527-87b2-083eb5ea3199" containerName="init" Oct 13 07:58:39 crc kubenswrapper[4833]: E1013 07:58:39.581411 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a4b45d-5ace-4527-87b2-083eb5ea3199" containerName="dnsmasq-dns" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.581453 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a4b45d-5ace-4527-87b2-083eb5ea3199" containerName="dnsmasq-dns" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.581866 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a4b45d-5ace-4527-87b2-083eb5ea3199" containerName="dnsmasq-dns" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.583147 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.587965 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.588445 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.588454 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zjzv9" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.590508 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.591064 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.602737 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqgbr\" (UniqueName: \"kubernetes.io/projected/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-kube-api-access-tqgbr\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.602984 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.603077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.603169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.603231 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.603361 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-scripts\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.603465 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-config\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: 
I1013 07:58:39.704619 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.704660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.704683 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.704715 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.704760 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-scripts\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.704792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-config\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.704834 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqgbr\" (UniqueName: \"kubernetes.io/projected/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-kube-api-access-tqgbr\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.705480 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.706389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-scripts\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.706927 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-config\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.711494 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.717233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.717374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.723659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqgbr\" (UniqueName: \"kubernetes.io/projected/8a6c7ab6-9cb4-4e29-b0cc-08939c57944d-kube-api-access-tqgbr\") pod \"ovn-northd-0\" (UID: \"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d\") " pod="openstack/ovn-northd-0" Oct 13 07:58:39 crc kubenswrapper[4833]: I1013 07:58:39.906836 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 13 07:58:40 crc kubenswrapper[4833]: I1013 07:58:40.387373 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 13 07:58:40 crc kubenswrapper[4833]: W1013 07:58:40.390642 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a6c7ab6_9cb4_4e29_b0cc_08939c57944d.slice/crio-3187c97df1f16ff94b11021c0734e4bb7c8123fd871752731de27212a138bde8 WatchSource:0}: Error finding container 3187c97df1f16ff94b11021c0734e4bb7c8123fd871752731de27212a138bde8: Status 404 returned error can't find the container with id 3187c97df1f16ff94b11021c0734e4bb7c8123fd871752731de27212a138bde8 Oct 13 07:58:41 crc kubenswrapper[4833]: I1013 07:58:41.024874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d","Type":"ContainerStarted","Data":"2828096712c9839ef56a9f1b44c4358f7add785a22b1733413de271dae8735d0"} Oct 13 07:58:41 crc kubenswrapper[4833]: I1013 07:58:41.025180 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 13 07:58:41 crc kubenswrapper[4833]: I1013 07:58:41.025190 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d","Type":"ContainerStarted","Data":"0e6df0a67d5d9e3a1d0c82879109f6c69cec060aa5a73830e0a5f9d35de68030"} Oct 13 07:58:41 crc kubenswrapper[4833]: I1013 07:58:41.025201 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8a6c7ab6-9cb4-4e29-b0cc-08939c57944d","Type":"ContainerStarted","Data":"3187c97df1f16ff94b11021c0734e4bb7c8123fd871752731de27212a138bde8"} Oct 13 07:58:41 crc kubenswrapper[4833]: I1013 07:58:41.045356 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.045332698 podStartE2EDuration="2.045332698s" podCreationTimestamp="2025-10-13 
07:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:58:41.039341647 +0000 UTC m=+5411.139764563" watchObservedRunningTime="2025-10-13 07:58:41.045332698 +0000 UTC m=+5411.145755614" Oct 13 07:58:44 crc kubenswrapper[4833]: I1013 07:58:44.487082 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fpjgk"] Oct 13 07:58:44 crc kubenswrapper[4833]: I1013 07:58:44.488763 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fpjgk" Oct 13 07:58:44 crc kubenswrapper[4833]: I1013 07:58:44.498434 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fpjgk"] Oct 13 07:58:44 crc kubenswrapper[4833]: I1013 07:58:44.591167 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zqb\" (UniqueName: \"kubernetes.io/projected/64a09cec-505e-46b9-9be8-163a017dd1e9-kube-api-access-s4zqb\") pod \"keystone-db-create-fpjgk\" (UID: \"64a09cec-505e-46b9-9be8-163a017dd1e9\") " pod="openstack/keystone-db-create-fpjgk" Oct 13 07:58:44 crc kubenswrapper[4833]: I1013 07:58:44.711831 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zqb\" (UniqueName: \"kubernetes.io/projected/64a09cec-505e-46b9-9be8-163a017dd1e9-kube-api-access-s4zqb\") pod \"keystone-db-create-fpjgk\" (UID: \"64a09cec-505e-46b9-9be8-163a017dd1e9\") " pod="openstack/keystone-db-create-fpjgk" Oct 13 07:58:44 crc kubenswrapper[4833]: I1013 07:58:44.749998 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zqb\" (UniqueName: \"kubernetes.io/projected/64a09cec-505e-46b9-9be8-163a017dd1e9-kube-api-access-s4zqb\") pod \"keystone-db-create-fpjgk\" (UID: \"64a09cec-505e-46b9-9be8-163a017dd1e9\") " pod="openstack/keystone-db-create-fpjgk" Oct 13 07:58:44 crc kubenswrapper[4833]: I1013 07:58:44.815498 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fpjgk" Oct 13 07:58:45 crc kubenswrapper[4833]: I1013 07:58:45.118323 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fpjgk"] Oct 13 07:58:46 crc kubenswrapper[4833]: I1013 07:58:46.078553 4833 generic.go:334] "Generic (PLEG): container finished" podID="64a09cec-505e-46b9-9be8-163a017dd1e9" containerID="ab834059bc8d962374421434c7f5bd3761319215c22db327e9c67458d33aa56b" exitCode=0 Oct 13 07:58:46 crc kubenswrapper[4833]: I1013 07:58:46.078606 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fpjgk" event={"ID":"64a09cec-505e-46b9-9be8-163a017dd1e9","Type":"ContainerDied","Data":"ab834059bc8d962374421434c7f5bd3761319215c22db327e9c67458d33aa56b"} Oct 13 07:58:46 crc kubenswrapper[4833]: I1013 07:58:46.078637 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fpjgk" event={"ID":"64a09cec-505e-46b9-9be8-163a017dd1e9","Type":"ContainerStarted","Data":"ea82566e0ecd33668c4a057cd8ccfc5c72910fd7bc21bb0bea8bc0d79a785453"} Oct 13 07:58:47 crc kubenswrapper[4833]: I1013 07:58:47.424491 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fpjgk" Oct 13 07:58:47 crc kubenswrapper[4833]: I1013 07:58:47.459873 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zqb\" (UniqueName: \"kubernetes.io/projected/64a09cec-505e-46b9-9be8-163a017dd1e9-kube-api-access-s4zqb\") pod \"64a09cec-505e-46b9-9be8-163a017dd1e9\" (UID: \"64a09cec-505e-46b9-9be8-163a017dd1e9\") " Oct 13 07:58:47 crc kubenswrapper[4833]: I1013 07:58:47.469391 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a09cec-505e-46b9-9be8-163a017dd1e9-kube-api-access-s4zqb" (OuterVolumeSpecName: "kube-api-access-s4zqb") pod "64a09cec-505e-46b9-9be8-163a017dd1e9" (UID: "64a09cec-505e-46b9-9be8-163a017dd1e9"). InnerVolumeSpecName "kube-api-access-s4zqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:58:47 crc kubenswrapper[4833]: I1013 07:58:47.562761 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zqb\" (UniqueName: \"kubernetes.io/projected/64a09cec-505e-46b9-9be8-163a017dd1e9-kube-api-access-s4zqb\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:48 crc kubenswrapper[4833]: I1013 07:58:48.106707 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fpjgk" event={"ID":"64a09cec-505e-46b9-9be8-163a017dd1e9","Type":"ContainerDied","Data":"ea82566e0ecd33668c4a057cd8ccfc5c72910fd7bc21bb0bea8bc0d79a785453"} Oct 13 07:58:48 crc kubenswrapper[4833]: I1013 07:58:48.106751 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea82566e0ecd33668c4a057cd8ccfc5c72910fd7bc21bb0bea8bc0d79a785453" Oct 13 07:58:48 crc kubenswrapper[4833]: I1013 07:58:48.106804 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fpjgk" Oct 13 07:58:50 crc kubenswrapper[4833]: I1013 07:58:50.679835 4833 scope.go:117] "RemoveContainer" containerID="e845784987db35c05573d9e70f8bd4c6e6c965b7de10110ef340bbe60f3f1764" Oct 13 07:58:50 crc kubenswrapper[4833]: I1013 07:58:50.715678 4833 scope.go:117] "RemoveContainer" containerID="4a5a2ef25b141d4b2641ab1f380b7ca7ed24833e5101fc8ab4db979082b9d746" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.499851 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6458-account-create-zlnxq"] Oct 13 07:58:54 crc kubenswrapper[4833]: E1013 07:58:54.500147 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a09cec-505e-46b9-9be8-163a017dd1e9" containerName="mariadb-database-create" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.500158 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a09cec-505e-46b9-9be8-163a017dd1e9" containerName="mariadb-database-create" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.500346 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a09cec-505e-46b9-9be8-163a017dd1e9" containerName="mariadb-database-create" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.501234 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6458-account-create-zlnxq" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.508447 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.522401 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6458-account-create-zlnxq"] Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.687708 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcnsz\" (UniqueName: \"kubernetes.io/projected/4f300349-5dfc-4c90-b952-e89f89a74f9d-kube-api-access-zcnsz\") pod \"keystone-6458-account-create-zlnxq\" (UID: \"4f300349-5dfc-4c90-b952-e89f89a74f9d\") " pod="openstack/keystone-6458-account-create-zlnxq" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.789960 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcnsz\" (UniqueName: \"kubernetes.io/projected/4f300349-5dfc-4c90-b952-e89f89a74f9d-kube-api-access-zcnsz\") pod \"keystone-6458-account-create-zlnxq\" (UID: \"4f300349-5dfc-4c90-b952-e89f89a74f9d\") " pod="openstack/keystone-6458-account-create-zlnxq" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.811666 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcnsz\" (UniqueName: \"kubernetes.io/projected/4f300349-5dfc-4c90-b952-e89f89a74f9d-kube-api-access-zcnsz\") pod \"keystone-6458-account-create-zlnxq\" (UID: \"4f300349-5dfc-4c90-b952-e89f89a74f9d\") " pod="openstack/keystone-6458-account-create-zlnxq" Oct 13 07:58:54 crc kubenswrapper[4833]: I1013 07:58:54.821977 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6458-account-create-zlnxq" Oct 13 07:58:55 crc kubenswrapper[4833]: I1013 07:58:55.011953 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 13 07:58:55 crc kubenswrapper[4833]: I1013 07:58:55.275961 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6458-account-create-zlnxq"] Oct 13 07:58:55 crc kubenswrapper[4833]: W1013 07:58:55.286315 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f300349_5dfc_4c90_b952_e89f89a74f9d.slice/crio-5b1d169e50efc0d6f93711f90f7bc6ffec91da3514454a9bdf6579165a99125e WatchSource:0}: Error finding container 5b1d169e50efc0d6f93711f90f7bc6ffec91da3514454a9bdf6579165a99125e: Status 404 returned error can't find the container with id 5b1d169e50efc0d6f93711f90f7bc6ffec91da3514454a9bdf6579165a99125e Oct 13 07:58:56 crc kubenswrapper[4833]: I1013 07:58:56.188451 4833 generic.go:334] "Generic (PLEG): container finished" podID="4f300349-5dfc-4c90-b952-e89f89a74f9d" containerID="2358d627f4b4f6ccc77818eeb4b48c9fd9fcdcc46baede8c999aaa3dbe5c960f" exitCode=0 Oct 13 07:58:56 crc kubenswrapper[4833]: I1013 07:58:56.188583 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6458-account-create-zlnxq" event={"ID":"4f300349-5dfc-4c90-b952-e89f89a74f9d","Type":"ContainerDied","Data":"2358d627f4b4f6ccc77818eeb4b48c9fd9fcdcc46baede8c999aaa3dbe5c960f"} Oct 13 07:58:56 crc kubenswrapper[4833]: I1013 07:58:56.189729 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6458-account-create-zlnxq" event={"ID":"4f300349-5dfc-4c90-b952-e89f89a74f9d","Type":"ContainerStarted","Data":"5b1d169e50efc0d6f93711f90f7bc6ffec91da3514454a9bdf6579165a99125e"} Oct 13 07:58:57 crc kubenswrapper[4833]: I1013 07:58:57.460507 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6458-account-create-zlnxq" Oct 13 07:58:57 crc kubenswrapper[4833]: I1013 07:58:57.539863 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcnsz\" (UniqueName: \"kubernetes.io/projected/4f300349-5dfc-4c90-b952-e89f89a74f9d-kube-api-access-zcnsz\") pod \"4f300349-5dfc-4c90-b952-e89f89a74f9d\" (UID: \"4f300349-5dfc-4c90-b952-e89f89a74f9d\") " Oct 13 07:58:57 crc kubenswrapper[4833]: I1013 07:58:57.547012 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f300349-5dfc-4c90-b952-e89f89a74f9d-kube-api-access-zcnsz" (OuterVolumeSpecName: "kube-api-access-zcnsz") pod "4f300349-5dfc-4c90-b952-e89f89a74f9d" (UID: "4f300349-5dfc-4c90-b952-e89f89a74f9d"). InnerVolumeSpecName "kube-api-access-zcnsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:58:57 crc kubenswrapper[4833]: I1013 07:58:57.641775 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcnsz\" (UniqueName: \"kubernetes.io/projected/4f300349-5dfc-4c90-b952-e89f89a74f9d-kube-api-access-zcnsz\") on node \"crc\" DevicePath \"\"" Oct 13 07:58:58 crc kubenswrapper[4833]: I1013 07:58:58.209156 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6458-account-create-zlnxq" event={"ID":"4f300349-5dfc-4c90-b952-e89f89a74f9d","Type":"ContainerDied","Data":"5b1d169e50efc0d6f93711f90f7bc6ffec91da3514454a9bdf6579165a99125e"} Oct 13 07:58:58 crc kubenswrapper[4833]: I1013 07:58:58.209487 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b1d169e50efc0d6f93711f90f7bc6ffec91da3514454a9bdf6579165a99125e" Oct 13 07:58:58 crc kubenswrapper[4833]: I1013 07:58:58.209200 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6458-account-create-zlnxq" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.973076 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-96x5z"] Oct 13 07:58:59 crc kubenswrapper[4833]: E1013 07:58:59.973400 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f300349-5dfc-4c90-b952-e89f89a74f9d" containerName="mariadb-account-create" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.973412 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f300349-5dfc-4c90-b952-e89f89a74f9d" containerName="mariadb-account-create" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.973566 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f300349-5dfc-4c90-b952-e89f89a74f9d" containerName="mariadb-account-create" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.974063 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-96x5z" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.976909 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.977751 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.978954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7gjll" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.979211 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 07:58:59 crc kubenswrapper[4833]: I1013 07:58:59.991238 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-96x5z"] Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.081054 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-config-data\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.081164 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-combined-ca-bundle\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.081256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mg7x\" (UniqueName: \"kubernetes.io/projected/9b179700-e622-4497-8d98-096ddef6c4bf-kube-api-access-4mg7x\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.182349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mg7x\" (UniqueName: \"kubernetes.io/projected/9b179700-e622-4497-8d98-096ddef6c4bf-kube-api-access-4mg7x\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.182745 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-config-data\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.182881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-combined-ca-bundle\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.189326 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-config-data\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " 
pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.190425 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-combined-ca-bundle\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.200051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mg7x\" (UniqueName: \"kubernetes.io/projected/9b179700-e622-4497-8d98-096ddef6c4bf-kube-api-access-4mg7x\") pod \"keystone-db-sync-96x5z\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.292316 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.542935 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.543193 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:59:00 crc kubenswrapper[4833]: I1013 07:59:00.811430 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-96x5z"] Oct 13 07:59:01 crc kubenswrapper[4833]: I1013 07:59:01.239954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-96x5z" event={"ID":"9b179700-e622-4497-8d98-096ddef6c4bf","Type":"ContainerStarted","Data":"822fd76c870110279b91f2053106b87ca4acac746835343aea4b90dea4888bb2"} Oct 13 07:59:01 crc kubenswrapper[4833]: I1013 07:59:01.240045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-96x5z" event={"ID":"9b179700-e622-4497-8d98-096ddef6c4bf","Type":"ContainerStarted","Data":"52d3821bbe53edd867335fbe6c5ee513140286b891d7f3d662869a8abaf84e9d"} Oct 13 07:59:01 crc kubenswrapper[4833]: I1013 07:59:01.259286 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-96x5z" podStartSLOduration=2.2592648459999998 podStartE2EDuration="2.259264846s" podCreationTimestamp="2025-10-13 07:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:59:01.252316668 +0000 UTC m=+5431.352739584" watchObservedRunningTime="2025-10-13 07:59:01.259264846 +0000 UTC m=+5431.359687762" Oct 13 07:59:03 crc kubenswrapper[4833]: I1013 07:59:03.263443 4833 generic.go:334] "Generic (PLEG): container finished" podID="9b179700-e622-4497-8d98-096ddef6c4bf" containerID="822fd76c870110279b91f2053106b87ca4acac746835343aea4b90dea4888bb2" exitCode=0 Oct 13 07:59:03 crc kubenswrapper[4833]: I1013 07:59:03.263527 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-96x5z" 
event={"ID":"9b179700-e622-4497-8d98-096ddef6c4bf","Type":"ContainerDied","Data":"822fd76c870110279b91f2053106b87ca4acac746835343aea4b90dea4888bb2"} Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.657388 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.757766 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-combined-ca-bundle\") pod \"9b179700-e622-4497-8d98-096ddef6c4bf\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.757857 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mg7x\" (UniqueName: \"kubernetes.io/projected/9b179700-e622-4497-8d98-096ddef6c4bf-kube-api-access-4mg7x\") pod \"9b179700-e622-4497-8d98-096ddef6c4bf\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.758005 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-config-data\") pod \"9b179700-e622-4497-8d98-096ddef6c4bf\" (UID: \"9b179700-e622-4497-8d98-096ddef6c4bf\") " Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.763695 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b179700-e622-4497-8d98-096ddef6c4bf-kube-api-access-4mg7x" (OuterVolumeSpecName: "kube-api-access-4mg7x") pod "9b179700-e622-4497-8d98-096ddef6c4bf" (UID: "9b179700-e622-4497-8d98-096ddef6c4bf"). InnerVolumeSpecName "kube-api-access-4mg7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.779948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b179700-e622-4497-8d98-096ddef6c4bf" (UID: "9b179700-e622-4497-8d98-096ddef6c4bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.813919 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-config-data" (OuterVolumeSpecName: "config-data") pod "9b179700-e622-4497-8d98-096ddef6c4bf" (UID: "9b179700-e622-4497-8d98-096ddef6c4bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.860203 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.860252 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mg7x\" (UniqueName: \"kubernetes.io/projected/9b179700-e622-4497-8d98-096ddef6c4bf-kube-api-access-4mg7x\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:04 crc kubenswrapper[4833]: I1013 07:59:04.860275 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b179700-e622-4497-8d98-096ddef6c4bf-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.286853 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-96x5z" event={"ID":"9b179700-e622-4497-8d98-096ddef6c4bf","Type":"ContainerDied","Data":"52d3821bbe53edd867335fbe6c5ee513140286b891d7f3d662869a8abaf84e9d"} Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.286899 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52d3821bbe53edd867335fbe6c5ee513140286b891d7f3d662869a8abaf84e9d" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.286958 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-96x5z" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.542892 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7858474d7c-dfrkm"] Oct 13 07:59:05 crc kubenswrapper[4833]: E1013 07:59:05.543280 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b179700-e622-4497-8d98-096ddef6c4bf" containerName="keystone-db-sync" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.543299 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b179700-e622-4497-8d98-096ddef6c4bf" containerName="keystone-db-sync" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.543516 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b179700-e622-4497-8d98-096ddef6c4bf" containerName="keystone-db-sync" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.544492 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.594472 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nzfdr"] Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.595472 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.599070 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.599114 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7gjll" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.599071 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.599285 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.608552 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7858474d7c-dfrkm"] Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.632955 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nzfdr"] Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.675339 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-dns-svc\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.675389 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vft58\" (UniqueName: \"kubernetes.io/projected/f6144ac3-73ed-4d20-8ec7-a971caa832a7-kube-api-access-vft58\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.675484 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-nb\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.675513 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-sb\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.675530 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-config\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.776833 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-scripts\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.776910 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-config-data\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.776976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-dns-svc\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.777028 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vft58\" (UniqueName: \"kubernetes.io/projected/f6144ac3-73ed-4d20-8ec7-a971caa832a7-kube-api-access-vft58\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.777074 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-credential-keys\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.777103 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-combined-ca-bundle\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.777153 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-nb\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.777181 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-config\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.777206 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-sb\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.777257 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-fernet-keys\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.777290 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvp2\" (UniqueName: \"kubernetes.io/projected/63e66d75-8738-47f9-ae12-51961d24731d-kube-api-access-thvp2\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.778093 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-dns-svc\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.778123 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-nb\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.778204 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-config\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.778321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-sb\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.809429 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vft58\" (UniqueName: \"kubernetes.io/projected/f6144ac3-73ed-4d20-8ec7-a971caa832a7-kube-api-access-vft58\") pod \"dnsmasq-dns-7858474d7c-dfrkm\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.867513 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.878933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-config-data\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.879730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-credential-keys\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.879787 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-combined-ca-bundle\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.879953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-fernet-keys\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.880008 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thvp2\" (UniqueName: \"kubernetes.io/projected/63e66d75-8738-47f9-ae12-51961d24731d-kube-api-access-thvp2\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.880074 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-scripts\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.883879 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-config-data\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.885835 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-scripts\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.890419 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-fernet-keys\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.891608 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-credential-keys\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.896674 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-combined-ca-bundle\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.905856 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thvp2\" (UniqueName: \"kubernetes.io/projected/63e66d75-8738-47f9-ae12-51961d24731d-kube-api-access-thvp2\") pod \"keystone-bootstrap-nzfdr\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") " pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:05 crc kubenswrapper[4833]: I1013 07:59:05.917828 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nzfdr" Oct 13 07:59:06 crc kubenswrapper[4833]: I1013 07:59:06.419566 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7858474d7c-dfrkm"] Oct 13 07:59:06 crc kubenswrapper[4833]: I1013 07:59:06.433584 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nzfdr"] Oct 13 07:59:07 crc kubenswrapper[4833]: I1013 07:59:07.305886 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzfdr" event={"ID":"63e66d75-8738-47f9-ae12-51961d24731d","Type":"ContainerStarted","Data":"c23cac5248e069421afeb41b6309979340b828b4aef28bf4635b8bc20f906f37"} Oct 13 07:59:07 crc kubenswrapper[4833]: I1013 07:59:07.306284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzfdr" event={"ID":"63e66d75-8738-47f9-ae12-51961d24731d","Type":"ContainerStarted","Data":"9e15b8af8976efa0c97c7877221c1ce5d4bfd723e00b0e6188dd10a08a650d3d"} Oct 13 07:59:07 crc kubenswrapper[4833]: I1013 07:59:07.307843 4833 generic.go:334] "Generic (PLEG): container finished" podID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" containerID="41297594f99bdb7cad78e6761bf3b756eb3fd09c25f900c1d5d75d12e8a0fedc" exitCode=0 Oct 13 07:59:07 crc kubenswrapper[4833]: I1013 07:59:07.307896 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" event={"ID":"f6144ac3-73ed-4d20-8ec7-a971caa832a7","Type":"ContainerDied","Data":"41297594f99bdb7cad78e6761bf3b756eb3fd09c25f900c1d5d75d12e8a0fedc"} Oct 13 07:59:07 crc kubenswrapper[4833]: I1013 07:59:07.307925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" event={"ID":"f6144ac3-73ed-4d20-8ec7-a971caa832a7","Type":"ContainerStarted","Data":"43e75f8bf16eb7279c726fd2bad451f09c9aafb0ff5c5c511baf1f0420b6baab"} Oct 13 07:59:07 crc kubenswrapper[4833]: I1013 07:59:07.348872 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nzfdr" podStartSLOduration=2.348836106 podStartE2EDuration="2.348836106s" podCreationTimestamp="2025-10-13 07:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:59:07.338689018 +0000 UTC m=+5437.439111964" watchObservedRunningTime="2025-10-13 07:59:07.348836106 
Oct 13 07:59:08 crc kubenswrapper[4833]: I1013 07:59:08.324533 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" event={"ID":"f6144ac3-73ed-4d20-8ec7-a971caa832a7","Type":"ContainerStarted","Data":"0b268c9b716fc9618bb959a20c0f87908220299864938a40663e7cff2695d0d9"}
Oct 13 07:59:08 crc kubenswrapper[4833]: I1013 07:59:08.325109 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm"
Oct 13 07:59:08 crc kubenswrapper[4833]: I1013 07:59:08.348808 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" podStartSLOduration=3.348785258 podStartE2EDuration="3.348785258s" podCreationTimestamp="2025-10-13 07:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:59:08.344587558 +0000 UTC m=+5438.445010504" watchObservedRunningTime="2025-10-13 07:59:08.348785258 +0000 UTC m=+5438.449208204"
Oct 13 07:59:10 crc kubenswrapper[4833]: I1013 07:59:10.348229 4833 generic.go:334] "Generic (PLEG): container finished" podID="63e66d75-8738-47f9-ae12-51961d24731d" containerID="c23cac5248e069421afeb41b6309979340b828b4aef28bf4635b8bc20f906f37" exitCode=0
Oct 13 07:59:10 crc kubenswrapper[4833]: I1013 07:59:10.348366 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzfdr" event={"ID":"63e66d75-8738-47f9-ae12-51961d24731d","Type":"ContainerDied","Data":"c23cac5248e069421afeb41b6309979340b828b4aef28bf4635b8bc20f906f37"}
Oct 13 07:59:11 crc kubenswrapper[4833]: I1013 07:59:11.825923 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nzfdr"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.013723 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-config-data\") pod \"63e66d75-8738-47f9-ae12-51961d24731d\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") "
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.013788 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thvp2\" (UniqueName: \"kubernetes.io/projected/63e66d75-8738-47f9-ae12-51961d24731d-kube-api-access-thvp2\") pod \"63e66d75-8738-47f9-ae12-51961d24731d\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") "
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.013842 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-credential-keys\") pod \"63e66d75-8738-47f9-ae12-51961d24731d\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") "
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.013868 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-scripts\") pod \"63e66d75-8738-47f9-ae12-51961d24731d\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") "
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.013963 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-combined-ca-bundle\") pod \"63e66d75-8738-47f9-ae12-51961d24731d\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") "
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.014006 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-fernet-keys\") pod \"63e66d75-8738-47f9-ae12-51961d24731d\" (UID: \"63e66d75-8738-47f9-ae12-51961d24731d\") "
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.022430 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-scripts" (OuterVolumeSpecName: "scripts") pod "63e66d75-8738-47f9-ae12-51961d24731d" (UID: "63e66d75-8738-47f9-ae12-51961d24731d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.023769 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "63e66d75-8738-47f9-ae12-51961d24731d" (UID: "63e66d75-8738-47f9-ae12-51961d24731d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.023859 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e66d75-8738-47f9-ae12-51961d24731d-kube-api-access-thvp2" (OuterVolumeSpecName: "kube-api-access-thvp2") pod "63e66d75-8738-47f9-ae12-51961d24731d" (UID: "63e66d75-8738-47f9-ae12-51961d24731d"). InnerVolumeSpecName "kube-api-access-thvp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.029788 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "63e66d75-8738-47f9-ae12-51961d24731d" (UID: "63e66d75-8738-47f9-ae12-51961d24731d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.052652 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-config-data" (OuterVolumeSpecName: "config-data") pod "63e66d75-8738-47f9-ae12-51961d24731d" (UID: "63e66d75-8738-47f9-ae12-51961d24731d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.053021 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63e66d75-8738-47f9-ae12-51961d24731d" (UID: "63e66d75-8738-47f9-ae12-51961d24731d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.116245 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.116286 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thvp2\" (UniqueName: \"kubernetes.io/projected/63e66d75-8738-47f9-ae12-51961d24731d-kube-api-access-thvp2\") on node \"crc\" DevicePath \"\""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.116301 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-credential-keys\") on node \"crc\" DevicePath \"\""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.116312 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.116325 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.116336 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63e66d75-8738-47f9-ae12-51961d24731d-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.379315 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzfdr" event={"ID":"63e66d75-8738-47f9-ae12-51961d24731d","Type":"ContainerDied","Data":"9e15b8af8976efa0c97c7877221c1ce5d4bfd723e00b0e6188dd10a08a650d3d"}
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.379609 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e15b8af8976efa0c97c7877221c1ce5d4bfd723e00b0e6188dd10a08a650d3d"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.379348 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nzfdr"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.487722 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nzfdr"]
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.494829 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nzfdr"]
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.552085 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8mgv5"]
Oct 13 07:59:12 crc kubenswrapper[4833]: E1013 07:59:12.552430 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e66d75-8738-47f9-ae12-51961d24731d" containerName="keystone-bootstrap"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.552453 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e66d75-8738-47f9-ae12-51961d24731d" containerName="keystone-bootstrap"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.552719 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e66d75-8738-47f9-ae12-51961d24731d" containerName="keystone-bootstrap"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.553375 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8mgv5"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.557511 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.557597 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.557689 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7gjll"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.558143 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.569209 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8mgv5"]
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.641983 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e66d75-8738-47f9-ae12-51961d24731d" path="/var/lib/kubelet/pods/63e66d75-8738-47f9-ae12-51961d24731d/volumes"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.726072 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-combined-ca-bundle\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.726412 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-scripts\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5"
Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.726626 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-credential-keys\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5"
pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.727003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxfk\" (UniqueName: \"kubernetes.io/projected/de418a90-689a-4e67-83db-8d62633f8657-kube-api-access-9sxfk\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.727061 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-fernet-keys\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.727162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-config-data\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.828561 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sxfk\" (UniqueName: \"kubernetes.io/projected/de418a90-689a-4e67-83db-8d62633f8657-kube-api-access-9sxfk\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.828617 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-fernet-keys\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.828660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-config-data\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.828734 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-combined-ca-bundle\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.829594 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-scripts\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.829678 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-credential-keys\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 
07:59:12.833999 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-combined-ca-bundle\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.834306 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-credential-keys\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.834484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-config-data\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.835713 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-scripts\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.836438 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-fernet-keys\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.847332 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sxfk\" (UniqueName: \"kubernetes.io/projected/de418a90-689a-4e67-83db-8d62633f8657-kube-api-access-9sxfk\") pod \"keystone-bootstrap-8mgv5\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:12 crc kubenswrapper[4833]: I1013 07:59:12.868064 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:13 crc kubenswrapper[4833]: I1013 07:59:13.315844 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8mgv5"] Oct 13 07:59:13 crc kubenswrapper[4833]: W1013 07:59:13.325736 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde418a90_689a_4e67_83db_8d62633f8657.slice/crio-492ec622cf914d3bc7fe6f59e198924929b93a81adf66b77151da33391e660da WatchSource:0}: Error finding container 492ec622cf914d3bc7fe6f59e198924929b93a81adf66b77151da33391e660da: Status 404 returned error can't find the container with id 492ec622cf914d3bc7fe6f59e198924929b93a81adf66b77151da33391e660da Oct 13 07:59:13 crc kubenswrapper[4833]: I1013 07:59:13.395965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8mgv5" event={"ID":"de418a90-689a-4e67-83db-8d62633f8657","Type":"ContainerStarted","Data":"492ec622cf914d3bc7fe6f59e198924929b93a81adf66b77151da33391e660da"} Oct 13 07:59:14 crc kubenswrapper[4833]: I1013 07:59:14.411840 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8mgv5" event={"ID":"de418a90-689a-4e67-83db-8d62633f8657","Type":"ContainerStarted","Data":"f6e366342a2b99abce28c251849269c2471823a00712ff9a6773729f030e3457"} Oct 13 07:59:14 crc kubenswrapper[4833]: I1013 07:59:14.442857 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8mgv5" podStartSLOduration=2.442829995 podStartE2EDuration="2.442829995s" podCreationTimestamp="2025-10-13 07:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:59:14.436213997 +0000 UTC m=+5444.536636953" watchObservedRunningTime="2025-10-13 07:59:14.442829995 +0000 UTC m=+5444.543252951" Oct 13 07:59:15 crc kubenswrapper[4833]: I1013 07:59:15.868842 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 07:59:15 crc kubenswrapper[4833]: I1013 07:59:15.967815 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8d8987b6c-pj8pt"] Oct 13 07:59:15 crc kubenswrapper[4833]: I1013 07:59:15.968141 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" podUID="0a194576-3860-4b16-a7a8-cb7951a1e776" containerName="dnsmasq-dns" containerID="cri-o://5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a" gracePeriod=10 Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.417295 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.433657 4833 generic.go:334] "Generic (PLEG): container finished" podID="0a194576-3860-4b16-a7a8-cb7951a1e776" containerID="5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a" exitCode=0 Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.433756 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" event={"ID":"0a194576-3860-4b16-a7a8-cb7951a1e776","Type":"ContainerDied","Data":"5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a"} Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.433819 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" event={"ID":"0a194576-3860-4b16-a7a8-cb7951a1e776","Type":"ContainerDied","Data":"a28c2dd694b8e19c56bb84ba3f732b438a8d82c98bffcb8f93438a9f8f2065f5"} Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.433841 4833 scope.go:117] "RemoveContainer" containerID="5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.434029 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8d8987b6c-pj8pt" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.439109 4833 generic.go:334] "Generic (PLEG): container finished" podID="de418a90-689a-4e67-83db-8d62633f8657" containerID="f6e366342a2b99abce28c251849269c2471823a00712ff9a6773729f030e3457" exitCode=0 Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.439147 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8mgv5" event={"ID":"de418a90-689a-4e67-83db-8d62633f8657","Type":"ContainerDied","Data":"f6e366342a2b99abce28c251849269c2471823a00712ff9a6773729f030e3457"} Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.462758 4833 scope.go:117] "RemoveContainer" containerID="324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.505286 4833 scope.go:117] "RemoveContainer" containerID="5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a" Oct 13 07:59:16 crc kubenswrapper[4833]: E1013 07:59:16.507108 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a\": container with ID starting with 5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a not found: ID does not exist" containerID="5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.507169 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a"} err="failed to get container status \"5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a\": rpc error: code = NotFound desc = could not find container \"5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a\": container with ID starting with 5330b3d03e45ec11652222a76fca262c9e56694acf30ac77cf9c9e188822182a not found: ID does not exist" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.507204 4833 scope.go:117] "RemoveContainer" containerID="324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc" Oct 13 07:59:16 crc kubenswrapper[4833]: E1013 07:59:16.507976 4833 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc\": container with ID starting with 324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc not found: ID does not exist" containerID="324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.508004 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc"} err="failed to get container status \"324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc\": rpc error: code = NotFound desc = could not find container \"324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc\": container with ID starting with 324f78161d34f49feb9b15f8b215a770f2f3a5688bcc09bd7fb66bb10a1be1cc not found: ID does not exist" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.508413 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lftzw\" (UniqueName: \"kubernetes.io/projected/0a194576-3860-4b16-a7a8-cb7951a1e776-kube-api-access-lftzw\") pod \"0a194576-3860-4b16-a7a8-cb7951a1e776\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.508484 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-sb\") pod \"0a194576-3860-4b16-a7a8-cb7951a1e776\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.508560 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-dns-svc\") pod \"0a194576-3860-4b16-a7a8-cb7951a1e776\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.513260 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a194576-3860-4b16-a7a8-cb7951a1e776-kube-api-access-lftzw" (OuterVolumeSpecName: "kube-api-access-lftzw") pod "0a194576-3860-4b16-a7a8-cb7951a1e776" (UID: "0a194576-3860-4b16-a7a8-cb7951a1e776"). InnerVolumeSpecName "kube-api-access-lftzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.548860 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a194576-3860-4b16-a7a8-cb7951a1e776" (UID: "0a194576-3860-4b16-a7a8-cb7951a1e776"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.557145 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a194576-3860-4b16-a7a8-cb7951a1e776" (UID: "0a194576-3860-4b16-a7a8-cb7951a1e776"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.609794 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-config\") pod \"0a194576-3860-4b16-a7a8-cb7951a1e776\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.609888 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-nb\") pod \"0a194576-3860-4b16-a7a8-cb7951a1e776\" (UID: \"0a194576-3860-4b16-a7a8-cb7951a1e776\") " Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.610652 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lftzw\" (UniqueName: \"kubernetes.io/projected/0a194576-3860-4b16-a7a8-cb7951a1e776-kube-api-access-lftzw\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.610825 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.610857 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.662867 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a194576-3860-4b16-a7a8-cb7951a1e776" (UID: "0a194576-3860-4b16-a7a8-cb7951a1e776"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.665401 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-config" (OuterVolumeSpecName: "config") pod "0a194576-3860-4b16-a7a8-cb7951a1e776" (UID: "0a194576-3860-4b16-a7a8-cb7951a1e776"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.713163 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-config\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.713224 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a194576-3860-4b16-a7a8-cb7951a1e776-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.774258 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8d8987b6c-pj8pt"] Oct 13 07:59:16 crc kubenswrapper[4833]: I1013 07:59:16.779022 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8d8987b6c-pj8pt"] Oct 13 07:59:17 crc kubenswrapper[4833]: I1013 07:59:17.842420 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.036221 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sxfk\" (UniqueName: \"kubernetes.io/projected/de418a90-689a-4e67-83db-8d62633f8657-kube-api-access-9sxfk\") pod \"de418a90-689a-4e67-83db-8d62633f8657\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.036794 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-credential-keys\") pod \"de418a90-689a-4e67-83db-8d62633f8657\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.036868 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-config-data\") pod \"de418a90-689a-4e67-83db-8d62633f8657\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.036919 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-combined-ca-bundle\") pod \"de418a90-689a-4e67-83db-8d62633f8657\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.037220 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-scripts\") pod \"de418a90-689a-4e67-83db-8d62633f8657\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.037284 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-fernet-keys\") pod \"de418a90-689a-4e67-83db-8d62633f8657\" (UID: \"de418a90-689a-4e67-83db-8d62633f8657\") " Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.043004 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "de418a90-689a-4e67-83db-8d62633f8657" (UID: "de418a90-689a-4e67-83db-8d62633f8657"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.045041 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de418a90-689a-4e67-83db-8d62633f8657-kube-api-access-9sxfk" (OuterVolumeSpecName: "kube-api-access-9sxfk") pod "de418a90-689a-4e67-83db-8d62633f8657" (UID: "de418a90-689a-4e67-83db-8d62633f8657"). InnerVolumeSpecName "kube-api-access-9sxfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.046103 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-scripts" (OuterVolumeSpecName: "scripts") pod "de418a90-689a-4e67-83db-8d62633f8657" (UID: "de418a90-689a-4e67-83db-8d62633f8657"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.048286 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "de418a90-689a-4e67-83db-8d62633f8657" (UID: "de418a90-689a-4e67-83db-8d62633f8657"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.089354 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-config-data" (OuterVolumeSpecName: "config-data") pod "de418a90-689a-4e67-83db-8d62633f8657" (UID: "de418a90-689a-4e67-83db-8d62633f8657"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.096114 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de418a90-689a-4e67-83db-8d62633f8657" (UID: "de418a90-689a-4e67-83db-8d62633f8657"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.140129 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.140187 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.140209 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sxfk\" (UniqueName: \"kubernetes.io/projected/de418a90-689a-4e67-83db-8d62633f8657-kube-api-access-9sxfk\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.140246 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.140265 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.140285 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de418a90-689a-4e67-83db-8d62633f8657-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.463407 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8mgv5" event={"ID":"de418a90-689a-4e67-83db-8d62633f8657","Type":"ContainerDied","Data":"492ec622cf914d3bc7fe6f59e198924929b93a81adf66b77151da33391e660da"} Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.463448 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="492ec622cf914d3bc7fe6f59e198924929b93a81adf66b77151da33391e660da" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.464661 4833 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8mgv5" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.563398 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c88f78f44-h69f7"] Oct 13 07:59:18 crc kubenswrapper[4833]: E1013 07:59:18.563783 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a194576-3860-4b16-a7a8-cb7951a1e776" containerName="dnsmasq-dns" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.563802 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a194576-3860-4b16-a7a8-cb7951a1e776" containerName="dnsmasq-dns" Oct 13 07:59:18 crc kubenswrapper[4833]: E1013 07:59:18.563837 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a194576-3860-4b16-a7a8-cb7951a1e776" containerName="init" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.563847 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a194576-3860-4b16-a7a8-cb7951a1e776" containerName="init" Oct 13 07:59:18 crc kubenswrapper[4833]: E1013 07:59:18.563867 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de418a90-689a-4e67-83db-8d62633f8657" containerName="keystone-bootstrap" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.563878 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="de418a90-689a-4e67-83db-8d62633f8657" containerName="keystone-bootstrap" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.564056 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a194576-3860-4b16-a7a8-cb7951a1e776" containerName="dnsmasq-dns" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.564091 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="de418a90-689a-4e67-83db-8d62633f8657" containerName="keystone-bootstrap" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.564722 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.572432 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.572646 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.572702 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.572702 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7gjll" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.572711 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.572653 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.572996 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c88f78f44-h69f7"] Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.636342 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a194576-3860-4b16-a7a8-cb7951a1e776" path="/var/lib/kubelet/pods/0a194576-3860-4b16-a7a8-cb7951a1e776/volumes" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.649020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-internal-tls-certs\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.649088 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs7nm\" (UniqueName: \"kubernetes.io/projected/9713b03c-2a06-4163-9540-d7fd9f32c2ab-kube-api-access-bs7nm\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.649182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-public-tls-certs\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.649247 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-config-data\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.649275 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-fernet-keys\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.649300 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-combined-ca-bundle\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.649337 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-credential-keys\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.649482 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-scripts\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.753939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-scripts\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.754483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-internal-tls-certs\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.754653 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs7nm\" (UniqueName: \"kubernetes.io/projected/9713b03c-2a06-4163-9540-d7fd9f32c2ab-kube-api-access-bs7nm\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.754842 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-public-tls-certs\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.754938 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-config-data\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.755001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-fernet-keys\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.755055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-combined-ca-bundle\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.755129 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-credential-keys\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.759683 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-combined-ca-bundle\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.760036 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-scripts\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.760576 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-internal-tls-certs\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.768226 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-credential-keys\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.768635 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-public-tls-certs\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.768883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-config-data\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.772208 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9713b03c-2a06-4163-9540-d7fd9f32c2ab-fernet-keys\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.779911 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs7nm\" (UniqueName: \"kubernetes.io/projected/9713b03c-2a06-4163-9540-d7fd9f32c2ab-kube-api-access-bs7nm\") pod \"keystone-c88f78f44-h69f7\" (UID: \"9713b03c-2a06-4163-9540-d7fd9f32c2ab\") " 
pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:18 crc kubenswrapper[4833]: I1013 07:59:18.888797 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:19 crc kubenswrapper[4833]: I1013 07:59:19.375186 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c88f78f44-h69f7"] Oct 13 07:59:19 crc kubenswrapper[4833]: I1013 07:59:19.473309 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c88f78f44-h69f7" event={"ID":"9713b03c-2a06-4163-9540-d7fd9f32c2ab","Type":"ContainerStarted","Data":"006e1a2900be4fddb6194392fed0889cf68c561652ac6d6f2d385991fc8bcf97"} Oct 13 07:59:20 crc kubenswrapper[4833]: I1013 07:59:20.488334 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c88f78f44-h69f7" event={"ID":"9713b03c-2a06-4163-9540-d7fd9f32c2ab","Type":"ContainerStarted","Data":"67bda1a88829b753cb252bc057cd61723dae6ccbfe20c2ea404483a536671240"} Oct 13 07:59:20 crc kubenswrapper[4833]: I1013 07:59:20.488958 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:20 crc kubenswrapper[4833]: I1013 07:59:20.519949 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c88f78f44-h69f7" podStartSLOduration=2.519927059 podStartE2EDuration="2.519927059s" podCreationTimestamp="2025-10-13 07:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:59:20.513514146 +0000 UTC m=+5450.613937062" watchObservedRunningTime="2025-10-13 07:59:20.519927059 +0000 UTC m=+5450.620349975" Oct 13 07:59:30 crc kubenswrapper[4833]: I1013 07:59:30.542754 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 07:59:30 crc kubenswrapper[4833]: I1013 07:59:30.543437 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 07:59:30 crc kubenswrapper[4833]: I1013 07:59:30.543508 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 07:59:30 crc kubenswrapper[4833]: I1013 07:59:30.544815 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"529aa9b3cfb16d8dd65469ec64ebdc474a0f44417fe7a26128d25222a7982fb0"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 07:59:30 crc kubenswrapper[4833]: I1013 07:59:30.544937 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://529aa9b3cfb16d8dd65469ec64ebdc474a0f44417fe7a26128d25222a7982fb0" gracePeriod=600 Oct 13 07:59:31 crc 
kubenswrapper[4833]: I1013 07:59:31.594918 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="529aa9b3cfb16d8dd65469ec64ebdc474a0f44417fe7a26128d25222a7982fb0" exitCode=0 Oct 13 07:59:31 crc kubenswrapper[4833]: I1013 07:59:31.595148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"529aa9b3cfb16d8dd65469ec64ebdc474a0f44417fe7a26128d25222a7982fb0"} Oct 13 07:59:31 crc kubenswrapper[4833]: I1013 07:59:31.595387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8"} Oct 13 07:59:31 crc kubenswrapper[4833]: I1013 07:59:31.595413 4833 scope.go:117] "RemoveContainer" containerID="a730b8e15d65542d1555c0490d7044bbf38de54446d43d1c11482a8cdf705cba" Oct 13 07:59:50 crc kubenswrapper[4833]: I1013 07:59:50.398768 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c88f78f44-h69f7" Oct 13 07:59:50 crc kubenswrapper[4833]: I1013 07:59:50.790863 4833 scope.go:117] "RemoveContainer" containerID="43f710b55224fe33946f449d78bc80043e891e046ee6ab6bce785aa47750bce4" Oct 13 07:59:50 crc kubenswrapper[4833]: I1013 07:59:50.832419 4833 scope.go:117] "RemoveContainer" containerID="0853a5dad95df3108798377be1e792e3f8fadefcb1cbe8c7c87d77a621704ebd" Oct 13 07:59:50 crc kubenswrapper[4833]: I1013 07:59:50.878093 4833 scope.go:117] "RemoveContainer" containerID="c2c664ca1ba051cfa48cb8e5a49cf224026b0aca68b73b15cdc483d8ca0ff332" Oct 13 07:59:50 crc kubenswrapper[4833]: I1013 07:59:50.912905 4833 scope.go:117] "RemoveContainer" containerID="f3cb7b46bf07b2c1a4d02b281dc2a3e1a0c757cbb7d8bde73a81afffc9662722" Oct 13 07:59:50 crc kubenswrapper[4833]: I1013 07:59:50.947694 4833 scope.go:117] "RemoveContainer" containerID="4e69027ac0ac6405fb9f5f35d4cf4ac861bdb4e44ed3b97b84fc537418836906" Oct 13 07:59:50 crc kubenswrapper[4833]: I1013 07:59:50.979528 4833 scope.go:117] "RemoveContainer" containerID="cfc553ed4438c1a27298c1a6d0cd9bd88a051814e0121a88a606a09c020f33b3" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.578158 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.580091 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.582619 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bqw7x" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.582948 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.583430 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.595010 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.700947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89258\" (UniqueName: \"kubernetes.io/projected/573128e7-9275-410b-8695-b2beb20484f9-kube-api-access-89258\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.701067 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/573128e7-9275-410b-8695-b2beb20484f9-openstack-config\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.701094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.701133 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.802987 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89258\" (UniqueName: \"kubernetes.io/projected/573128e7-9275-410b-8695-b2beb20484f9-kube-api-access-89258\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.803054 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/573128e7-9275-410b-8695-b2beb20484f9-openstack-config\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.803080 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.803117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.804131 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/573128e7-9275-410b-8695-b2beb20484f9-openstack-config\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.809528 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.818359 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.818755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89258\" (UniqueName: \"kubernetes.io/projected/573128e7-9275-410b-8695-b2beb20484f9-kube-api-access-89258\") pod \"openstackclient\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " pod="openstack/openstackclient" Oct 13 07:59:54 crc kubenswrapper[4833]: I1013 07:59:54.909386 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 07:59:55 crc kubenswrapper[4833]: I1013 07:59:55.381628 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 07:59:55 crc kubenswrapper[4833]: I1013 07:59:55.839032 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"573128e7-9275-410b-8695-b2beb20484f9","Type":"ContainerStarted","Data":"7577c7fee48c73c05eba3347c4095af439391a496d474574993139365c4d6e75"} Oct 13 07:59:55 crc kubenswrapper[4833]: I1013 07:59:55.839380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"573128e7-9275-410b-8695-b2beb20484f9","Type":"ContainerStarted","Data":"ce208a4f79bb08928c16c31f643016cab2ed23e9de4bd3271520aa74cbd1b026"} Oct 13 07:59:55 crc kubenswrapper[4833]: I1013 07:59:55.866767 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.866742764 podStartE2EDuration="1.866742764s" podCreationTimestamp="2025-10-13 07:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 07:59:55.855385891 +0000 UTC m=+5485.955808817" watchObservedRunningTime="2025-10-13 07:59:55.866742764 +0000 UTC m=+5485.967165680" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.447011 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8k8lg"] Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.448974 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.461129 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8k8lg"] Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.529729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-utilities\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.529829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-catalog-content\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.529856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snhtc\" (UniqueName: \"kubernetes.io/projected/6f9bf60c-1e49-4162-8232-14a251e9bc48-kube-api-access-snhtc\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.631018 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-catalog-content\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.631243 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snhtc\" (UniqueName: \"kubernetes.io/projected/6f9bf60c-1e49-4162-8232-14a251e9bc48-kube-api-access-snhtc\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.631389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-utilities\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.631943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-utilities\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.632287 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-catalog-content\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.655303 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-snhtc\" (UniqueName: \"kubernetes.io/projected/6f9bf60c-1e49-4162-8232-14a251e9bc48-kube-api-access-snhtc\") pod \"community-operators-8k8lg\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:56 crc kubenswrapper[4833]: I1013 07:59:56.789140 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8k8lg" Oct 13 07:59:57 crc kubenswrapper[4833]: I1013 07:59:57.257618 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8k8lg"] Oct 13 07:59:57 crc kubenswrapper[4833]: W1013 07:59:57.266908 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f9bf60c_1e49_4162_8232_14a251e9bc48.slice/crio-e50bbe3a94be0d6c1dd9a29a1c8a0647250e049310431390554d200347a996f1 WatchSource:0}: Error finding container e50bbe3a94be0d6c1dd9a29a1c8a0647250e049310431390554d200347a996f1: Status 404 returned error can't find the container with id e50bbe3a94be0d6c1dd9a29a1c8a0647250e049310431390554d200347a996f1 Oct 13 07:59:57 crc kubenswrapper[4833]: I1013 07:59:57.866107 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerID="b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274" exitCode=0 Oct 13 07:59:57 crc kubenswrapper[4833]: I1013 07:59:57.866232 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k8lg" event={"ID":"6f9bf60c-1e49-4162-8232-14a251e9bc48","Type":"ContainerDied","Data":"b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274"} Oct 13 07:59:57 crc kubenswrapper[4833]: I1013 07:59:57.866484 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k8lg" event={"ID":"6f9bf60c-1e49-4162-8232-14a251e9bc48","Type":"ContainerStarted","Data":"e50bbe3a94be0d6c1dd9a29a1c8a0647250e049310431390554d200347a996f1"} Oct 13 07:59:58 crc kubenswrapper[4833]: I1013 07:59:58.880794 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k8lg" event={"ID":"6f9bf60c-1e49-4162-8232-14a251e9bc48","Type":"ContainerStarted","Data":"94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965"} Oct 13 07:59:59 crc kubenswrapper[4833]: I1013 07:59:59.893097 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerID="94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965" exitCode=0 Oct 13 07:59:59 crc kubenswrapper[4833]: I1013 07:59:59.893202 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k8lg" event={"ID":"6f9bf60c-1e49-4162-8232-14a251e9bc48","Type":"ContainerDied","Data":"94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965"} Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.138113 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m"] Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.139958 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.142358 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.142615 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.149912 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m"] Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.207010 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-secret-volume\") pod \"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.207281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4zs\" (UniqueName: \"kubernetes.io/projected/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-kube-api-access-fk4zs\") pod \"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.207443 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-config-volume\") pod \"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.308321 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk4zs\" (UniqueName: \"kubernetes.io/projected/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-kube-api-access-fk4zs\") pod \"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.308407 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-config-volume\") pod \"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.308437 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-secret-volume\") pod \"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.309948 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-config-volume\") pod 
\"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.323297 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-secret-volume\") pod \"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.338602 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk4zs\" (UniqueName: \"kubernetes.io/projected/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-kube-api-access-fk4zs\") pod \"collect-profiles-29339040-5mx8m\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.477876 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.767821 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m"] Oct 13 08:00:00 crc kubenswrapper[4833]: W1013 08:00:00.776778 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc9d7e62_9f76_449f_90f7_bcaf9e66da5d.slice/crio-98c290b6d6022c1eed2f388cfae3467223e065123021cd30c92d73630cc899b1 WatchSource:0}: Error finding container 98c290b6d6022c1eed2f388cfae3467223e065123021cd30c92d73630cc899b1: Status 404 returned error can't find the container with id 98c290b6d6022c1eed2f388cfae3467223e065123021cd30c92d73630cc899b1 Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.904677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k8lg" event={"ID":"6f9bf60c-1e49-4162-8232-14a251e9bc48","Type":"ContainerStarted","Data":"ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a"} Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.905586 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" event={"ID":"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d","Type":"ContainerStarted","Data":"98c290b6d6022c1eed2f388cfae3467223e065123021cd30c92d73630cc899b1"} Oct 13 08:00:00 crc kubenswrapper[4833]: I1013 08:00:00.929616 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8k8lg" podStartSLOduration=2.168701675 podStartE2EDuration="4.92959872s" podCreationTimestamp="2025-10-13 07:59:56 +0000 UTC" firstStartedPulling="2025-10-13 07:59:57.870076867 +0000 UTC m=+5487.970499823" lastFinishedPulling="2025-10-13 08:00:00.630973952 +0000 UTC m=+5490.731396868" observedRunningTime="2025-10-13 08:00:00.923675832 +0000 UTC m=+5491.024098758" watchObservedRunningTime="2025-10-13 08:00:00.92959872 +0000 UTC m=+5491.030021636" Oct 13 08:00:01 crc kubenswrapper[4833]: I1013 08:00:01.916212 4833 generic.go:334] "Generic (PLEG): container finished" podID="fc9d7e62-9f76-449f-90f7-bcaf9e66da5d" containerID="0e52208a96027a5c7e6b9fc703a2f6b7ffa11182f67a0fd1463eee046a4d39f8" exitCode=0 Oct 13 08:00:01 crc kubenswrapper[4833]: I1013 
08:00:01.917408 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" event={"ID":"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d","Type":"ContainerDied","Data":"0e52208a96027a5c7e6b9fc703a2f6b7ffa11182f67a0fd1463eee046a4d39f8"} Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.262101 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.395300 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk4zs\" (UniqueName: \"kubernetes.io/projected/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-kube-api-access-fk4zs\") pod \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.395359 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-config-volume\") pod \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.395396 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-secret-volume\") pod \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\" (UID: \"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d\") " Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.396376 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc9d7e62-9f76-449f-90f7-bcaf9e66da5d" (UID: "fc9d7e62-9f76-449f-90f7-bcaf9e66da5d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.400722 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-kube-api-access-fk4zs" (OuterVolumeSpecName: "kube-api-access-fk4zs") pod "fc9d7e62-9f76-449f-90f7-bcaf9e66da5d" (UID: "fc9d7e62-9f76-449f-90f7-bcaf9e66da5d"). InnerVolumeSpecName "kube-api-access-fk4zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.403321 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc9d7e62-9f76-449f-90f7-bcaf9e66da5d" (UID: "fc9d7e62-9f76-449f-90f7-bcaf9e66da5d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.497212 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk4zs\" (UniqueName: \"kubernetes.io/projected/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-kube-api-access-fk4zs\") on node \"crc\" DevicePath \"\"" Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.497258 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.497273 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.937028 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" event={"ID":"fc9d7e62-9f76-449f-90f7-bcaf9e66da5d","Type":"ContainerDied","Data":"98c290b6d6022c1eed2f388cfae3467223e065123021cd30c92d73630cc899b1"} Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.937339 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c290b6d6022c1eed2f388cfae3467223e065123021cd30c92d73630cc899b1" Oct 13 08:00:03 crc kubenswrapper[4833]: I1013 08:00:03.937163 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m" Oct 13 08:00:04 crc kubenswrapper[4833]: I1013 08:00:04.353045 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs"] Oct 13 08:00:04 crc kubenswrapper[4833]: I1013 08:00:04.363401 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29338995-m9jjs"] Oct 13 08:00:04 crc kubenswrapper[4833]: I1013 08:00:04.639406 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a30bb4f-fdcb-48b3-819c-004e02282a56" path="/var/lib/kubelet/pods/8a30bb4f-fdcb-48b3-819c-004e02282a56/volumes" Oct 13 08:00:06 crc kubenswrapper[4833]: I1013 08:00:06.790099 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8k8lg" Oct 13 08:00:06 crc kubenswrapper[4833]: I1013 08:00:06.790893 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8k8lg" Oct 13 08:00:06 crc kubenswrapper[4833]: I1013 08:00:06.861460 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8k8lg" Oct 13 08:00:07 crc kubenswrapper[4833]: I1013 08:00:07.035224 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8k8lg" Oct 13 08:00:07 crc kubenswrapper[4833]: I1013 08:00:07.107040 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8k8lg"] Oct 13 08:00:08 crc kubenswrapper[4833]: I1013 08:00:08.996036 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8k8lg" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerName="registry-server" 
containerID="cri-o://ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a" gracePeriod=2 Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.532418 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8k8lg" Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.608428 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-catalog-content\") pod \"6f9bf60c-1e49-4162-8232-14a251e9bc48\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.608558 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-utilities\") pod \"6f9bf60c-1e49-4162-8232-14a251e9bc48\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.608688 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snhtc\" (UniqueName: \"kubernetes.io/projected/6f9bf60c-1e49-4162-8232-14a251e9bc48-kube-api-access-snhtc\") pod \"6f9bf60c-1e49-4162-8232-14a251e9bc48\" (UID: \"6f9bf60c-1e49-4162-8232-14a251e9bc48\") " Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.610851 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-utilities" (OuterVolumeSpecName: "utilities") pod "6f9bf60c-1e49-4162-8232-14a251e9bc48" (UID: "6f9bf60c-1e49-4162-8232-14a251e9bc48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.614937 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9bf60c-1e49-4162-8232-14a251e9bc48-kube-api-access-snhtc" (OuterVolumeSpecName: "kube-api-access-snhtc") pod "6f9bf60c-1e49-4162-8232-14a251e9bc48" (UID: "6f9bf60c-1e49-4162-8232-14a251e9bc48"). InnerVolumeSpecName "kube-api-access-snhtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.673152 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f9bf60c-1e49-4162-8232-14a251e9bc48" (UID: "6f9bf60c-1e49-4162-8232-14a251e9bc48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.710820 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.710858 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9bf60c-1e49-4162-8232-14a251e9bc48-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:00:09 crc kubenswrapper[4833]: I1013 08:00:09.710871 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snhtc\" (UniqueName: \"kubernetes.io/projected/6f9bf60c-1e49-4162-8232-14a251e9bc48-kube-api-access-snhtc\") on node \"crc\" DevicePath \"\"" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.008430 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerID="ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a" exitCode=0 Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.008479 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k8lg" event={"ID":"6f9bf60c-1e49-4162-8232-14a251e9bc48","Type":"ContainerDied","Data":"ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a"} Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.008526 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k8lg" event={"ID":"6f9bf60c-1e49-4162-8232-14a251e9bc48","Type":"ContainerDied","Data":"e50bbe3a94be0d6c1dd9a29a1c8a0647250e049310431390554d200347a996f1"} Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.008569 4833 scope.go:117] "RemoveContainer" containerID="ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.010107 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8k8lg" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.045390 4833 scope.go:117] "RemoveContainer" containerID="94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.072734 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8k8lg"] Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.082554 4833 scope.go:117] "RemoveContainer" containerID="b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.083265 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8k8lg"] Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.128755 4833 scope.go:117] "RemoveContainer" containerID="ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a" Oct 13 08:00:10 crc kubenswrapper[4833]: E1013 08:00:10.129281 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a\": container with ID starting with ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a not found: ID does not exist" containerID="ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.129319 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a"} err="failed to get container status \"ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a\": rpc error: code = NotFound desc = could not find container \"ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a\": container with ID starting with ac164b3e58794b1b281aa11f778b885e1ce7b15dace1ea44f86de6717635367a not found: ID does not exist" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.129342 4833 scope.go:117] "RemoveContainer" containerID="94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965" Oct 13 08:00:10 crc kubenswrapper[4833]: E1013 08:00:10.129934 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965\": container with ID starting with 94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965 not found: ID does not exist" containerID="94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.130139 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965"} err="failed to get container status \"94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965\": rpc error: code = NotFound desc = could not find container \"94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965\": container with ID starting with 94e6edfb8f489eb88305493b6af52229e1bcb7e95b2829634d9b71cbb79e3965 not found: ID does not exist" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.130294 4833 scope.go:117] "RemoveContainer" containerID="b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274" Oct 13 08:00:10 crc kubenswrapper[4833]: E1013 08:00:10.131228 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274\": container with ID starting with b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274 not found: ID does not exist" containerID="b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.131260 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274"} err="failed to get container status \"b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274\": rpc error: code = NotFound desc = could not find container \"b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274\": container with ID starting with b09fab42c507447979aba96705483d239a1789918f53fc704cad92261041a274 not found: ID does not exist" Oct 13 08:00:10 crc kubenswrapper[4833]: I1013 08:00:10.649162 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" path="/var/lib/kubelet/pods/6f9bf60c-1e49-4162-8232-14a251e9bc48/volumes" Oct 13 08:00:51 crc kubenswrapper[4833]: I1013 08:00:51.167998 4833 scope.go:117] "RemoveContainer" containerID="66b21a7bb3015aacf43423146dae504db52692ddd530dc3167ed13630a96bdcc" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.138951 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29339041-tp27n"] Oct 13 08:01:00 crc kubenswrapper[4833]: E1013 08:01:00.141298 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerName="extract-utilities" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.141412 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerName="extract-utilities" Oct 13 08:01:00 crc kubenswrapper[4833]: E1013 08:01:00.141513 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerName="registry-server" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.141627 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerName="registry-server" Oct 13 08:01:00 crc kubenswrapper[4833]: E1013 08:01:00.141715 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerName="extract-content" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.141786 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerName="extract-content" Oct 13 08:01:00 crc kubenswrapper[4833]: E1013 08:01:00.141871 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9d7e62-9f76-449f-90f7-bcaf9e66da5d" containerName="collect-profiles" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.141943 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9d7e62-9f76-449f-90f7-bcaf9e66da5d" containerName="collect-profiles" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.142190 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9bf60c-1e49-4162-8232-14a251e9bc48" containerName="registry-server" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.142300 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9d7e62-9f76-449f-90f7-bcaf9e66da5d" containerName="collect-profiles" Oct 13 08:01:00 crc 
kubenswrapper[4833]: I1013 08:01:00.143165 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.151311 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29339041-tp27n"] Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.297357 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-combined-ca-bundle\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.297410 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz8xc\" (UniqueName: \"kubernetes.io/projected/c9696285-91dc-48ff-911b-0f984c7c17f4-kube-api-access-qz8xc\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.297442 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-fernet-keys\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.297489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-config-data\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.399396 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-combined-ca-bundle\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.399446 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz8xc\" (UniqueName: \"kubernetes.io/projected/c9696285-91dc-48ff-911b-0f984c7c17f4-kube-api-access-qz8xc\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.399479 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-fernet-keys\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.399525 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-config-data\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc 
kubenswrapper[4833]: I1013 08:01:00.405157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-combined-ca-bundle\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.405280 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-config-data\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.405443 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-fernet-keys\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.415972 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz8xc\" (UniqueName: \"kubernetes.io/projected/c9696285-91dc-48ff-911b-0f984c7c17f4-kube-api-access-qz8xc\") pod \"keystone-cron-29339041-tp27n\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.469361 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:00 crc kubenswrapper[4833]: I1013 08:01:00.978426 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29339041-tp27n"] Oct 13 08:01:01 crc kubenswrapper[4833]: I1013 08:01:01.538718 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339041-tp27n" event={"ID":"c9696285-91dc-48ff-911b-0f984c7c17f4","Type":"ContainerStarted","Data":"82d5a0c3216f5723fd9c9124d87ce5cff39bcb170f2bf9fedcdaa7d835998109"} Oct 13 08:01:01 crc kubenswrapper[4833]: I1013 08:01:01.539001 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339041-tp27n" event={"ID":"c9696285-91dc-48ff-911b-0f984c7c17f4","Type":"ContainerStarted","Data":"738c0b8a3cb69187a0bf11817551d0bfc5346098a6525d01b2edcaffb9138f37"} Oct 13 08:01:01 crc kubenswrapper[4833]: I1013 08:01:01.560397 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29339041-tp27n" podStartSLOduration=1.560382074 podStartE2EDuration="1.560382074s" podCreationTimestamp="2025-10-13 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:01:01.556890375 +0000 UTC m=+5551.657313291" watchObservedRunningTime="2025-10-13 08:01:01.560382074 +0000 UTC m=+5551.660804990" Oct 13 08:01:03 crc kubenswrapper[4833]: I1013 08:01:03.558777 4833 generic.go:334] "Generic (PLEG): container finished" podID="c9696285-91dc-48ff-911b-0f984c7c17f4" containerID="82d5a0c3216f5723fd9c9124d87ce5cff39bcb170f2bf9fedcdaa7d835998109" exitCode=0 Oct 13 08:01:03 crc kubenswrapper[4833]: I1013 08:01:03.558864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339041-tp27n" 
event={"ID":"c9696285-91dc-48ff-911b-0f984c7c17f4","Type":"ContainerDied","Data":"82d5a0c3216f5723fd9c9124d87ce5cff39bcb170f2bf9fedcdaa7d835998109"} Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.059220 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.100915 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-config-data\") pod \"c9696285-91dc-48ff-911b-0f984c7c17f4\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.100975 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz8xc\" (UniqueName: \"kubernetes.io/projected/c9696285-91dc-48ff-911b-0f984c7c17f4-kube-api-access-qz8xc\") pod \"c9696285-91dc-48ff-911b-0f984c7c17f4\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.101067 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-combined-ca-bundle\") pod \"c9696285-91dc-48ff-911b-0f984c7c17f4\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.101209 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-fernet-keys\") pod \"c9696285-91dc-48ff-911b-0f984c7c17f4\" (UID: \"c9696285-91dc-48ff-911b-0f984c7c17f4\") " Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.109594 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9696285-91dc-48ff-911b-0f984c7c17f4-kube-api-access-qz8xc" (OuterVolumeSpecName: "kube-api-access-qz8xc") pod "c9696285-91dc-48ff-911b-0f984c7c17f4" (UID: "c9696285-91dc-48ff-911b-0f984c7c17f4"). InnerVolumeSpecName "kube-api-access-qz8xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.126185 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c9696285-91dc-48ff-911b-0f984c7c17f4" (UID: "c9696285-91dc-48ff-911b-0f984c7c17f4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.137391 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9696285-91dc-48ff-911b-0f984c7c17f4" (UID: "c9696285-91dc-48ff-911b-0f984c7c17f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.167890 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-config-data" (OuterVolumeSpecName: "config-data") pod "c9696285-91dc-48ff-911b-0f984c7c17f4" (UID: "c9696285-91dc-48ff-911b-0f984c7c17f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.202831 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.202862 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.202894 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz8xc\" (UniqueName: \"kubernetes.io/projected/c9696285-91dc-48ff-911b-0f984c7c17f4-kube-api-access-qz8xc\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.202904 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9696285-91dc-48ff-911b-0f984c7c17f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.585866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339041-tp27n" event={"ID":"c9696285-91dc-48ff-911b-0f984c7c17f4","Type":"ContainerDied","Data":"738c0b8a3cb69187a0bf11817551d0bfc5346098a6525d01b2edcaffb9138f37"} Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.586316 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="738c0b8a3cb69187a0bf11817551d0bfc5346098a6525d01b2edcaffb9138f37" Oct 13 08:01:05 crc kubenswrapper[4833]: I1013 08:01:05.586420 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339041-tp27n" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.058889 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t7zl7"] Oct 13 08:01:08 crc kubenswrapper[4833]: E1013 08:01:08.061820 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9696285-91dc-48ff-911b-0f984c7c17f4" containerName="keystone-cron" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.062017 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9696285-91dc-48ff-911b-0f984c7c17f4" containerName="keystone-cron" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.062727 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9696285-91dc-48ff-911b-0f984c7c17f4" containerName="keystone-cron" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.066292 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.079386 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7zl7"] Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.166747 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-catalog-content\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.167150 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwx67\" (UniqueName: \"kubernetes.io/projected/7324b577-855e-471d-a4a9-c5683f5d9e61-kube-api-access-mwx67\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.167302 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-utilities\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.268738 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwx67\" (UniqueName: \"kubernetes.io/projected/7324b577-855e-471d-a4a9-c5683f5d9e61-kube-api-access-mwx67\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.268846 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-utilities\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.268927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-catalog-content\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.269572 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-catalog-content\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.269670 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-utilities\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.303766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mwx67\" (UniqueName: \"kubernetes.io/projected/7324b577-855e-471d-a4a9-c5683f5d9e61-kube-api-access-mwx67\") pod \"redhat-operators-t7zl7\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.397767 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:08 crc kubenswrapper[4833]: I1013 08:01:08.962788 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7zl7"] Oct 13 08:01:09 crc kubenswrapper[4833]: I1013 08:01:09.643624 4833 generic.go:334] "Generic (PLEG): container finished" podID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerID="6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0" exitCode=0 Oct 13 08:01:09 crc kubenswrapper[4833]: I1013 08:01:09.643671 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zl7" event={"ID":"7324b577-855e-471d-a4a9-c5683f5d9e61","Type":"ContainerDied","Data":"6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0"} Oct 13 08:01:09 crc kubenswrapper[4833]: I1013 08:01:09.643715 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zl7" event={"ID":"7324b577-855e-471d-a4a9-c5683f5d9e61","Type":"ContainerStarted","Data":"9b24db79fb90ac5f9b6992798de26187d37b89b00c095f9d7c65890d77e01fd0"} Oct 13 08:01:09 crc kubenswrapper[4833]: I1013 08:01:09.646964 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 08:01:11 crc kubenswrapper[4833]: I1013 08:01:11.664309 4833 generic.go:334] "Generic (PLEG): container finished" podID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerID="03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628" exitCode=0 Oct 13 08:01:11 crc kubenswrapper[4833]: I1013 08:01:11.664380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zl7" event={"ID":"7324b577-855e-471d-a4a9-c5683f5d9e61","Type":"ContainerDied","Data":"03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628"} Oct 13 08:01:12 crc kubenswrapper[4833]: I1013 08:01:12.677764 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zl7" event={"ID":"7324b577-855e-471d-a4a9-c5683f5d9e61","Type":"ContainerStarted","Data":"e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec"} Oct 13 08:01:12 crc kubenswrapper[4833]: I1013 08:01:12.708282 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t7zl7" podStartSLOduration=2.218747209 podStartE2EDuration="4.70825941s" podCreationTimestamp="2025-10-13 08:01:08 +0000 UTC" firstStartedPulling="2025-10-13 08:01:09.646577485 +0000 UTC m=+5559.747000441" lastFinishedPulling="2025-10-13 08:01:12.136089686 +0000 UTC m=+5562.236512642" observedRunningTime="2025-10-13 08:01:12.701042095 +0000 UTC m=+5562.801465041" watchObservedRunningTime="2025-10-13 08:01:12.70825941 +0000 UTC m=+5562.808682336" Oct 13 08:01:18 crc kubenswrapper[4833]: I1013 08:01:18.398788 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:18 crc kubenswrapper[4833]: I1013 08:01:18.399251 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:18 crc kubenswrapper[4833]: I1013 08:01:18.478255 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:18 crc kubenswrapper[4833]: I1013 08:01:18.801496 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:18 crc kubenswrapper[4833]: I1013 08:01:18.858594 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7zl7"] Oct 13 08:01:20 crc kubenswrapper[4833]: I1013 08:01:20.762579 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t7zl7" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerName="registry-server" containerID="cri-o://e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec" gracePeriod=2 Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.307919 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.418818 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-catalog-content\") pod \"7324b577-855e-471d-a4a9-c5683f5d9e61\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.419025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwx67\" (UniqueName: \"kubernetes.io/projected/7324b577-855e-471d-a4a9-c5683f5d9e61-kube-api-access-mwx67\") pod \"7324b577-855e-471d-a4a9-c5683f5d9e61\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.419134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-utilities\") pod \"7324b577-855e-471d-a4a9-c5683f5d9e61\" (UID: \"7324b577-855e-471d-a4a9-c5683f5d9e61\") " Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.420709 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-utilities" (OuterVolumeSpecName: "utilities") pod "7324b577-855e-471d-a4a9-c5683f5d9e61" (UID: "7324b577-855e-471d-a4a9-c5683f5d9e61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.427380 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7324b577-855e-471d-a4a9-c5683f5d9e61-kube-api-access-mwx67" (OuterVolumeSpecName: "kube-api-access-mwx67") pod "7324b577-855e-471d-a4a9-c5683f5d9e61" (UID: "7324b577-855e-471d-a4a9-c5683f5d9e61"). InnerVolumeSpecName "kube-api-access-mwx67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.522467 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwx67\" (UniqueName: \"kubernetes.io/projected/7324b577-855e-471d-a4a9-c5683f5d9e61-kube-api-access-mwx67\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.522534 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.775080 4833 generic.go:334] "Generic (PLEG): container finished" podID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerID="e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec" exitCode=0 Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.775156 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zl7" event={"ID":"7324b577-855e-471d-a4a9-c5683f5d9e61","Type":"ContainerDied","Data":"e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec"} Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.775181 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7zl7" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.775256 4833 scope.go:117] "RemoveContainer" containerID="e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.775237 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7zl7" event={"ID":"7324b577-855e-471d-a4a9-c5683f5d9e61","Type":"ContainerDied","Data":"9b24db79fb90ac5f9b6992798de26187d37b89b00c095f9d7c65890d77e01fd0"} Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.797428 4833 scope.go:117] "RemoveContainer" containerID="03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.825276 4833 scope.go:117] "RemoveContainer" containerID="6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.861138 4833 scope.go:117] "RemoveContainer" containerID="e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec" Oct 13 08:01:21 crc kubenswrapper[4833]: E1013 08:01:21.861679 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec\": container with ID starting with e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec not found: ID does not exist" containerID="e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.861732 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec"} err="failed to get container status \"e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec\": rpc error: code = NotFound desc = could not find container \"e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec\": container with ID starting with e7a1e4e9c522e608385a4e25f7e39042f1458147d24d94c0eb648e93562b40ec not found: ID does not exist" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.861769 4833 scope.go:117] 
"RemoveContainer" containerID="03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628" Oct 13 08:01:21 crc kubenswrapper[4833]: E1013 08:01:21.862560 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628\": container with ID starting with 03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628 not found: ID does not exist" containerID="03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.862605 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628"} err="failed to get container status \"03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628\": rpc error: code = NotFound desc = could not find container \"03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628\": container with ID starting with 03ea436f2fd27cb1701ea5450ecde5f891a3bc2c60d25ff5d568c893c4fbf628 not found: ID does not exist" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.862634 4833 scope.go:117] "RemoveContainer" containerID="6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0" Oct 13 08:01:21 crc kubenswrapper[4833]: E1013 08:01:21.863119 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0\": container with ID starting with 6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0 not found: ID does not exist" containerID="6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0" Oct 13 08:01:21 crc kubenswrapper[4833]: I1013 08:01:21.863159 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0"} err="failed to get container status \"6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0\": rpc error: code = NotFound desc = could not find container \"6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0\": container with ID starting with 6cd0adb156087f60354a0e3cc7b3e5987916675c8b6a058f70b7b3df958f55c0 not found: ID does not exist" Oct 13 08:01:22 crc kubenswrapper[4833]: I1013 08:01:22.284973 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7324b577-855e-471d-a4a9-c5683f5d9e61" (UID: "7324b577-855e-471d-a4a9-c5683f5d9e61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:01:22 crc kubenswrapper[4833]: I1013 08:01:22.336978 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7324b577-855e-471d-a4a9-c5683f5d9e61-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:22 crc kubenswrapper[4833]: I1013 08:01:22.417472 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7zl7"] Oct 13 08:01:22 crc kubenswrapper[4833]: I1013 08:01:22.431051 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t7zl7"] Oct 13 08:01:22 crc kubenswrapper[4833]: I1013 08:01:22.647161 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" path="/var/lib/kubelet/pods/7324b577-855e-471d-a4a9-c5683f5d9e61/volumes" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.071190 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7n2mh"] Oct 13 08:01:30 crc kubenswrapper[4833]: E1013 08:01:30.073427 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerName="extract-content" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.073617 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerName="extract-content" Oct 13 08:01:30 crc kubenswrapper[4833]: E1013 08:01:30.073797 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerName="registry-server" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.073908 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerName="registry-server" Oct 13 08:01:30 crc kubenswrapper[4833]: E1013 08:01:30.074066 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerName="extract-utilities" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.074171 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerName="extract-utilities" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.074532 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7324b577-855e-471d-a4a9-c5683f5d9e61" containerName="registry-server" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.075388 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7n2mh" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.090133 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7n2mh"] Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.205775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddv2h\" (UniqueName: \"kubernetes.io/projected/4e6b0b75-c288-4ecb-9bc0-96c9a79abb10-kube-api-access-ddv2h\") pod \"barbican-db-create-7n2mh\" (UID: \"4e6b0b75-c288-4ecb-9bc0-96c9a79abb10\") " pod="openstack/barbican-db-create-7n2mh" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.307852 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddv2h\" (UniqueName: \"kubernetes.io/projected/4e6b0b75-c288-4ecb-9bc0-96c9a79abb10-kube-api-access-ddv2h\") pod \"barbican-db-create-7n2mh\" (UID: \"4e6b0b75-c288-4ecb-9bc0-96c9a79abb10\") " pod="openstack/barbican-db-create-7n2mh" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.340511 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddv2h\" (UniqueName: \"kubernetes.io/projected/4e6b0b75-c288-4ecb-9bc0-96c9a79abb10-kube-api-access-ddv2h\") pod \"barbican-db-create-7n2mh\" (UID: \"4e6b0b75-c288-4ecb-9bc0-96c9a79abb10\") " pod="openstack/barbican-db-create-7n2mh" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.454943 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7n2mh" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.542320 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.542631 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:01:30 crc kubenswrapper[4833]: I1013 08:01:30.892391 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7n2mh"] Oct 13 08:01:31 crc kubenswrapper[4833]: I1013 08:01:31.872399 4833 generic.go:334] "Generic (PLEG): container finished" podID="4e6b0b75-c288-4ecb-9bc0-96c9a79abb10" containerID="cae36f7c179114b8281334c08ba6a15e2615e3f522b744f4fdf707d88101d901" exitCode=0 Oct 13 08:01:31 crc kubenswrapper[4833]: I1013 08:01:31.872584 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7n2mh" event={"ID":"4e6b0b75-c288-4ecb-9bc0-96c9a79abb10","Type":"ContainerDied","Data":"cae36f7c179114b8281334c08ba6a15e2615e3f522b744f4fdf707d88101d901"} Oct 13 08:01:31 crc kubenswrapper[4833]: I1013 08:01:31.873800 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7n2mh" event={"ID":"4e6b0b75-c288-4ecb-9bc0-96c9a79abb10","Type":"ContainerStarted","Data":"f72be33b1f79034314ea258e67dfee81097e52fe98db8ed59c8b6fb81844eca5"} Oct 13 08:01:33 crc kubenswrapper[4833]: I1013 08:01:33.292578 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7n2mh" Oct 13 08:01:33 crc kubenswrapper[4833]: I1013 08:01:33.366743 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddv2h\" (UniqueName: \"kubernetes.io/projected/4e6b0b75-c288-4ecb-9bc0-96c9a79abb10-kube-api-access-ddv2h\") pod \"4e6b0b75-c288-4ecb-9bc0-96c9a79abb10\" (UID: \"4e6b0b75-c288-4ecb-9bc0-96c9a79abb10\") " Oct 13 08:01:33 crc kubenswrapper[4833]: I1013 08:01:33.389870 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6b0b75-c288-4ecb-9bc0-96c9a79abb10-kube-api-access-ddv2h" (OuterVolumeSpecName: "kube-api-access-ddv2h") pod "4e6b0b75-c288-4ecb-9bc0-96c9a79abb10" (UID: "4e6b0b75-c288-4ecb-9bc0-96c9a79abb10"). InnerVolumeSpecName "kube-api-access-ddv2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:01:33 crc kubenswrapper[4833]: I1013 08:01:33.469657 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddv2h\" (UniqueName: \"kubernetes.io/projected/4e6b0b75-c288-4ecb-9bc0-96c9a79abb10-kube-api-access-ddv2h\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:33 crc kubenswrapper[4833]: I1013 08:01:33.895178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7n2mh" event={"ID":"4e6b0b75-c288-4ecb-9bc0-96c9a79abb10","Type":"ContainerDied","Data":"f72be33b1f79034314ea258e67dfee81097e52fe98db8ed59c8b6fb81844eca5"} Oct 13 08:01:33 crc kubenswrapper[4833]: I1013 08:01:33.895598 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72be33b1f79034314ea258e67dfee81097e52fe98db8ed59c8b6fb81844eca5" Oct 13 08:01:33 crc kubenswrapper[4833]: I1013 08:01:33.895241 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7n2mh" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.193214 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3eb6-account-create-xgvlm"] Oct 13 08:01:40 crc kubenswrapper[4833]: E1013 08:01:40.194628 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b0b75-c288-4ecb-9bc0-96c9a79abb10" containerName="mariadb-database-create" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.194664 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b0b75-c288-4ecb-9bc0-96c9a79abb10" containerName="mariadb-database-create" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.195096 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b0b75-c288-4ecb-9bc0-96c9a79abb10" containerName="mariadb-database-create" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.196291 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3eb6-account-create-xgvlm" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.198778 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.222313 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3eb6-account-create-xgvlm"] Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.297318 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgfnx\" (UniqueName: \"kubernetes.io/projected/5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a-kube-api-access-xgfnx\") pod \"barbican-3eb6-account-create-xgvlm\" (UID: \"5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a\") " pod="openstack/barbican-3eb6-account-create-xgvlm" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.400965 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgfnx\" (UniqueName: \"kubernetes.io/projected/5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a-kube-api-access-xgfnx\") pod \"barbican-3eb6-account-create-xgvlm\" (UID: \"5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a\") " pod="openstack/barbican-3eb6-account-create-xgvlm" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.437895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgfnx\" (UniqueName: \"kubernetes.io/projected/5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a-kube-api-access-xgfnx\") pod \"barbican-3eb6-account-create-xgvlm\" (UID: \"5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a\") " pod="openstack/barbican-3eb6-account-create-xgvlm" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.526276 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3eb6-account-create-xgvlm" Oct 13 08:01:40 crc kubenswrapper[4833]: I1013 08:01:40.979445 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3eb6-account-create-xgvlm"] Oct 13 08:01:41 crc kubenswrapper[4833]: I1013 08:01:41.993070 4833 generic.go:334] "Generic (PLEG): container finished" podID="5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a" containerID="8bd47cee7a525c2bd51e570539b3ababb1085b0d89c94c1001bf0cbe47e02c10" exitCode=0 Oct 13 08:01:41 crc kubenswrapper[4833]: I1013 08:01:41.993153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3eb6-account-create-xgvlm" event={"ID":"5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a","Type":"ContainerDied","Data":"8bd47cee7a525c2bd51e570539b3ababb1085b0d89c94c1001bf0cbe47e02c10"} Oct 13 08:01:41 crc kubenswrapper[4833]: I1013 08:01:41.993422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3eb6-account-create-xgvlm" event={"ID":"5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a","Type":"ContainerStarted","Data":"667e25d4f35b6a9be66c6aaa2103047022cfb4c98a8b3cfd7c83206bde11e73b"} Oct 13 08:01:43 crc kubenswrapper[4833]: I1013 08:01:43.501786 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3eb6-account-create-xgvlm" Oct 13 08:01:43 crc kubenswrapper[4833]: I1013 08:01:43.662168 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgfnx\" (UniqueName: \"kubernetes.io/projected/5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a-kube-api-access-xgfnx\") pod \"5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a\" (UID: \"5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a\") " Oct 13 08:01:43 crc kubenswrapper[4833]: I1013 08:01:43.668952 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a-kube-api-access-xgfnx" (OuterVolumeSpecName: "kube-api-access-xgfnx") pod "5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a" (UID: "5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a"). InnerVolumeSpecName "kube-api-access-xgfnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:01:43 crc kubenswrapper[4833]: I1013 08:01:43.764800 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgfnx\" (UniqueName: \"kubernetes.io/projected/5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a-kube-api-access-xgfnx\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:44 crc kubenswrapper[4833]: I1013 08:01:44.014852 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3eb6-account-create-xgvlm" event={"ID":"5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a","Type":"ContainerDied","Data":"667e25d4f35b6a9be66c6aaa2103047022cfb4c98a8b3cfd7c83206bde11e73b"} Oct 13 08:01:44 crc kubenswrapper[4833]: I1013 08:01:44.014893 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="667e25d4f35b6a9be66c6aaa2103047022cfb4c98a8b3cfd7c83206bde11e73b" Oct 13 08:01:44 crc kubenswrapper[4833]: I1013 08:01:44.014904 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3eb6-account-create-xgvlm" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.502613 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2qxgl"] Oct 13 08:01:45 crc kubenswrapper[4833]: E1013 08:01:45.503406 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a" containerName="mariadb-account-create" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.503429 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a" containerName="mariadb-account-create" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.503701 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a" containerName="mariadb-account-create" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.504418 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.507332 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.514154 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fjdv6" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.542737 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2qxgl"] Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.600836 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6tw\" (UniqueName: \"kubernetes.io/projected/0e82f80e-e2cb-4040-832a-84adfa9ea71b-kube-api-access-zg6tw\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.601161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-db-sync-config-data\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.601362 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-combined-ca-bundle\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.702581 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg6tw\" (UniqueName: \"kubernetes.io/projected/0e82f80e-e2cb-4040-832a-84adfa9ea71b-kube-api-access-zg6tw\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.702649 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-db-sync-config-data\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.702688 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-combined-ca-bundle\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.707985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-combined-ca-bundle\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.709690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-db-sync-config-data\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.719732 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg6tw\" (UniqueName: \"kubernetes.io/projected/0e82f80e-e2cb-4040-832a-84adfa9ea71b-kube-api-access-zg6tw\") pod \"barbican-db-sync-2qxgl\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:45 crc kubenswrapper[4833]: I1013 08:01:45.826603 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:46 crc kubenswrapper[4833]: I1013 08:01:46.301704 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2qxgl"] Oct 13 08:01:47 crc kubenswrapper[4833]: I1013 08:01:47.049689 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2qxgl" event={"ID":"0e82f80e-e2cb-4040-832a-84adfa9ea71b","Type":"ContainerStarted","Data":"eec1b88ea50ea1ada585bba5c540ba7497293dc120cae542bc0e6a6663965a7c"} Oct 13 08:01:47 crc kubenswrapper[4833]: I1013 08:01:47.050659 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2qxgl" event={"ID":"0e82f80e-e2cb-4040-832a-84adfa9ea71b","Type":"ContainerStarted","Data":"669947ebe17083ca95bd7f99e7fac5c0dad8a88f3324b79d5e1538bd31d8b353"} Oct 13 08:01:47 crc kubenswrapper[4833]: I1013 08:01:47.080002 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2qxgl" podStartSLOduration=2.079976046 podStartE2EDuration="2.079976046s" podCreationTimestamp="2025-10-13 08:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:01:47.069634442 +0000 UTC m=+5597.170057378" watchObservedRunningTime="2025-10-13 08:01:47.079976046 +0000 UTC m=+5597.180398972" Oct 13 08:01:49 crc kubenswrapper[4833]: I1013 08:01:49.072095 4833 generic.go:334] "Generic (PLEG): container finished" podID="0e82f80e-e2cb-4040-832a-84adfa9ea71b" containerID="eec1b88ea50ea1ada585bba5c540ba7497293dc120cae542bc0e6a6663965a7c" exitCode=0 Oct 13 08:01:49 crc kubenswrapper[4833]: I1013 08:01:49.072186 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2qxgl" event={"ID":"0e82f80e-e2cb-4040-832a-84adfa9ea71b","Type":"ContainerDied","Data":"eec1b88ea50ea1ada585bba5c540ba7497293dc120cae542bc0e6a6663965a7c"} Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.411184 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.493337 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-db-sync-config-data\") pod \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.493498 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-combined-ca-bundle\") pod \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.493659 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg6tw\" (UniqueName: \"kubernetes.io/projected/0e82f80e-e2cb-4040-832a-84adfa9ea71b-kube-api-access-zg6tw\") pod \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\" (UID: \"0e82f80e-e2cb-4040-832a-84adfa9ea71b\") " Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.504871 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0e82f80e-e2cb-4040-832a-84adfa9ea71b" (UID: "0e82f80e-e2cb-4040-832a-84adfa9ea71b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.505991 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e82f80e-e2cb-4040-832a-84adfa9ea71b-kube-api-access-zg6tw" (OuterVolumeSpecName: "kube-api-access-zg6tw") pod "0e82f80e-e2cb-4040-832a-84adfa9ea71b" (UID: "0e82f80e-e2cb-4040-832a-84adfa9ea71b"). InnerVolumeSpecName "kube-api-access-zg6tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.522441 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e82f80e-e2cb-4040-832a-84adfa9ea71b" (UID: "0e82f80e-e2cb-4040-832a-84adfa9ea71b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.595798 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.595846 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e82f80e-e2cb-4040-832a-84adfa9ea71b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:50 crc kubenswrapper[4833]: I1013 08:01:50.595865 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg6tw\" (UniqueName: \"kubernetes.io/projected/0e82f80e-e2cb-4040-832a-84adfa9ea71b-kube-api-access-zg6tw\") on node \"crc\" DevicePath \"\"" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.101823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2qxgl" event={"ID":"0e82f80e-e2cb-4040-832a-84adfa9ea71b","Type":"ContainerDied","Data":"669947ebe17083ca95bd7f99e7fac5c0dad8a88f3324b79d5e1538bd31d8b353"} Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.101874 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="669947ebe17083ca95bd7f99e7fac5c0dad8a88f3324b79d5e1538bd31d8b353" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.101882 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2qxgl" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.365394 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-77b9cc5f97-fmd25"] Oct 13 08:01:51 crc kubenswrapper[4833]: E1013 08:01:51.365734 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e82f80e-e2cb-4040-832a-84adfa9ea71b" containerName="barbican-db-sync" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.365749 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e82f80e-e2cb-4040-832a-84adfa9ea71b" containerName="barbican-db-sync" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.365904 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e82f80e-e2cb-4040-832a-84adfa9ea71b" containerName="barbican-db-sync" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.366693 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.378142 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.378467 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.378702 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fjdv6" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.400266 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77b9cc5f97-fmd25"] Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.415238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-config-data-custom\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.415287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fd61fda-332f-4333-adc9-e1815b3a1433-logs\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.415341 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m42c\" (UniqueName: \"kubernetes.io/projected/9fd61fda-332f-4333-adc9-e1815b3a1433-kube-api-access-4m42c\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.415401 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-config-data\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.415422 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-combined-ca-bundle\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.423065 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj"] Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.435556 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.445054 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.454018 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj"] Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.519849 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-config-data-custom\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.519912 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fd61fda-332f-4333-adc9-e1815b3a1433-logs\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.519970 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m42c\" (UniqueName: \"kubernetes.io/projected/9fd61fda-332f-4333-adc9-e1815b3a1433-kube-api-access-4m42c\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.520034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-config-data\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.520065 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-combined-ca-bundle\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.520751 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fd61fda-332f-4333-adc9-e1815b3a1433-logs\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.540165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-config-data-custom\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.548399 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-combined-ca-bundle\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " 
pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.580451 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m42c\" (UniqueName: \"kubernetes.io/projected/9fd61fda-332f-4333-adc9-e1815b3a1433-kube-api-access-4m42c\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.583398 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd61fda-332f-4333-adc9-e1815b3a1433-config-data\") pod \"barbican-worker-77b9cc5f97-fmd25\" (UID: \"9fd61fda-332f-4333-adc9-e1815b3a1433\") " pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.603621 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757f897c4f-nv78p"] Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.641293 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-config-data\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.641349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-combined-ca-bundle\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.641424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbbz\" (UniqueName: \"kubernetes.io/projected/e418cfbe-e180-41a6-9730-91552572bfce-kube-api-access-bfbbz\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.641559 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-config-data-custom\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.641675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e418cfbe-e180-41a6-9730-91552572bfce-logs\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.644613 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.694165 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757f897c4f-nv78p"] Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.701114 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9f59b7956-9bj8x"] Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.702972 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.709445 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.730169 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77b9cc5f97-fmd25" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.741235 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9f59b7956-9bj8x"] Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.744768 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e418cfbe-e180-41a6-9730-91552572bfce-logs\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.744898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-config-data\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.744936 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-combined-ca-bundle\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.745002 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbbz\" (UniqueName: \"kubernetes.io/projected/e418cfbe-e180-41a6-9730-91552572bfce-kube-api-access-bfbbz\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.745034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-config-data-custom\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.748362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e418cfbe-e180-41a6-9730-91552572bfce-logs\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " 
pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.749289 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-config-data\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.751414 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-combined-ca-bundle\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.764511 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e418cfbe-e180-41a6-9730-91552572bfce-config-data-custom\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.769294 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbbz\" (UniqueName: \"kubernetes.io/projected/e418cfbe-e180-41a6-9730-91552572bfce-kube-api-access-bfbbz\") pod \"barbican-keystone-listener-5f96d4c4b6-c4vtj\" (UID: \"e418cfbe-e180-41a6-9730-91552572bfce\") " pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.799150 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.848940 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6z4p\" (UniqueName: \"kubernetes.io/projected/a776b922-af90-4326-8f70-261decad52ce-kube-api-access-n6z4p\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849196 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpfm\" (UniqueName: \"kubernetes.io/projected/1ece2475-4308-40f4-9c61-9663fa98fa06-kube-api-access-5bpfm\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-config\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849254 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-nb\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849268 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849303 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a776b922-af90-4326-8f70-261decad52ce-logs\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849374 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-combined-ca-bundle\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849395 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-sb\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849434 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data-custom\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.849499 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-dns-svc\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-dns-svc\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951634 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6z4p\" (UniqueName: \"kubernetes.io/projected/a776b922-af90-4326-8f70-261decad52ce-kube-api-access-n6z4p\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951663 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bpfm\" (UniqueName: \"kubernetes.io/projected/1ece2475-4308-40f4-9c61-9663fa98fa06-kube-api-access-5bpfm\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951688 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-config\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951713 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-nb\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951734 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951773 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a776b922-af90-4326-8f70-261decad52ce-logs\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-combined-ca-bundle\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951850 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-sb\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.951888 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data-custom\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.952950 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a776b922-af90-4326-8f70-261decad52ce-logs\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.953782 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-nb\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.954255 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-dns-svc\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.956009 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-sb\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.959732 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-config\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.961901 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-combined-ca-bundle\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.962056 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data-custom\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: 
\"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.972462 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.974918 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bpfm\" (UniqueName: \"kubernetes.io/projected/1ece2475-4308-40f4-9c61-9663fa98fa06-kube-api-access-5bpfm\") pod \"dnsmasq-dns-757f897c4f-nv78p\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:51 crc kubenswrapper[4833]: I1013 08:01:51.976339 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6z4p\" (UniqueName: \"kubernetes.io/projected/a776b922-af90-4326-8f70-261decad52ce-kube-api-access-n6z4p\") pod \"barbican-api-9f59b7956-9bj8x\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:52 crc kubenswrapper[4833]: I1013 08:01:52.010989 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:52 crc kubenswrapper[4833]: I1013 08:01:52.025585 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:52 crc kubenswrapper[4833]: I1013 08:01:52.051144 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj"] Oct 13 08:01:52 crc kubenswrapper[4833]: W1013 08:01:52.058134 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode418cfbe_e180_41a6_9730_91552572bfce.slice/crio-458c97f0744967d470cd2076265fa02194c6e820d9e86f4c4a89adb85493f327 WatchSource:0}: Error finding container 458c97f0744967d470cd2076265fa02194c6e820d9e86f4c4a89adb85493f327: Status 404 returned error can't find the container with id 458c97f0744967d470cd2076265fa02194c6e820d9e86f4c4a89adb85493f327 Oct 13 08:01:52 crc kubenswrapper[4833]: I1013 08:01:52.139438 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" event={"ID":"e418cfbe-e180-41a6-9730-91552572bfce","Type":"ContainerStarted","Data":"458c97f0744967d470cd2076265fa02194c6e820d9e86f4c4a89adb85493f327"} Oct 13 08:01:52 crc kubenswrapper[4833]: I1013 08:01:52.224222 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77b9cc5f97-fmd25"] Oct 13 08:01:52 crc kubenswrapper[4833]: I1013 08:01:52.278451 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757f897c4f-nv78p"] Oct 13 08:01:52 crc kubenswrapper[4833]: I1013 08:01:52.576231 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9f59b7956-9bj8x"] Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.155387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" event={"ID":"e418cfbe-e180-41a6-9730-91552572bfce","Type":"ContainerStarted","Data":"220818cf48fd17e783bb2a5580406bb6a399c39632e9fb1ce67cefb9e47ee093"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 
08:01:53.155843 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" event={"ID":"e418cfbe-e180-41a6-9730-91552572bfce","Type":"ContainerStarted","Data":"d0ec0a10828a41a98cb40f7555e0f6aa77456d7d973e497af0183573974f06de"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.163065 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77b9cc5f97-fmd25" event={"ID":"9fd61fda-332f-4333-adc9-e1815b3a1433","Type":"ContainerStarted","Data":"48989147f6a7e77f75d230f341fc66c82c4294d13f34ceed12f9317a5ca94b43"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.163110 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77b9cc5f97-fmd25" event={"ID":"9fd61fda-332f-4333-adc9-e1815b3a1433","Type":"ContainerStarted","Data":"19aa0891b2a1f964b61142c7eb23c225a18d9c1e511e63bd98c08f80d06c0c59"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.163123 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77b9cc5f97-fmd25" event={"ID":"9fd61fda-332f-4333-adc9-e1815b3a1433","Type":"ContainerStarted","Data":"4f39ed58f9eefa9314436e53f5e76995f05263917740c22bf16d37d4169c8907"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.177289 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f96d4c4b6-c4vtj" podStartSLOduration=2.1772665939999998 podStartE2EDuration="2.177266594s" podCreationTimestamp="2025-10-13 08:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:01:53.171796469 +0000 UTC m=+5603.272219375" watchObservedRunningTime="2025-10-13 08:01:53.177266594 +0000 UTC m=+5603.277689510" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.178465 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f59b7956-9bj8x" event={"ID":"a776b922-af90-4326-8f70-261decad52ce","Type":"ContainerStarted","Data":"99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.178511 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f59b7956-9bj8x" event={"ID":"a776b922-af90-4326-8f70-261decad52ce","Type":"ContainerStarted","Data":"12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.178521 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f59b7956-9bj8x" event={"ID":"a776b922-af90-4326-8f70-261decad52ce","Type":"ContainerStarted","Data":"bae738b99e2bd6e77cf8d39bc839df543e983770feee41368f035a2b513d0ab5"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.179190 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.179255 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.184455 4833 generic.go:334] "Generic (PLEG): container finished" podID="1ece2475-4308-40f4-9c61-9663fa98fa06" containerID="b6b8c79d3f056b14774dc9e5d8468f9192da270491d51102fe1ef1f5c99a34ed" exitCode=0 Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.184502 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" 
event={"ID":"1ece2475-4308-40f4-9c61-9663fa98fa06","Type":"ContainerDied","Data":"b6b8c79d3f056b14774dc9e5d8468f9192da270491d51102fe1ef1f5c99a34ed"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.184522 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" event={"ID":"1ece2475-4308-40f4-9c61-9663fa98fa06","Type":"ContainerStarted","Data":"21f6aedf27a55d11513c381a5d4f37e99ffd758c7502e17ec5be0207a270aec8"} Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.196048 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-77b9cc5f97-fmd25" podStartSLOduration=2.196032388 podStartE2EDuration="2.196032388s" podCreationTimestamp="2025-10-13 08:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:01:53.193571488 +0000 UTC m=+5603.293994414" watchObservedRunningTime="2025-10-13 08:01:53.196032388 +0000 UTC m=+5603.296455304" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.243319 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9f59b7956-9bj8x" podStartSLOduration=2.243296401 podStartE2EDuration="2.243296401s" podCreationTimestamp="2025-10-13 08:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:01:53.232738101 +0000 UTC m=+5603.333161017" watchObservedRunningTime="2025-10-13 08:01:53.243296401 +0000 UTC m=+5603.343719317" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.695027 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b489798bd-m2wph"] Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.701062 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.703275 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.703444 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.704715 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b489798bd-m2wph"] Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.795587 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-internal-tls-certs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.795662 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-config-data\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.795690 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-combined-ca-bundle\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.795717 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d855a17c-efd2-41dc-939d-264069e488e7-logs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.795747 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-config-data-custom\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.795830 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24mlj\" (UniqueName: \"kubernetes.io/projected/d855a17c-efd2-41dc-939d-264069e488e7-kube-api-access-24mlj\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.795854 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-public-tls-certs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.898242 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-24mlj\" (UniqueName: \"kubernetes.io/projected/d855a17c-efd2-41dc-939d-264069e488e7-kube-api-access-24mlj\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.898318 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-public-tls-certs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.898383 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-internal-tls-certs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.898468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-config-data\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.898518 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-combined-ca-bundle\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.898615 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d855a17c-efd2-41dc-939d-264069e488e7-logs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.898678 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-config-data-custom\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.900911 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d855a17c-efd2-41dc-939d-264069e488e7-logs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.905115 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-public-tls-certs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.905139 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-config-data\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.914515 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-config-data-custom\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.936356 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-combined-ca-bundle\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.937414 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d855a17c-efd2-41dc-939d-264069e488e7-internal-tls-certs\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:53 crc kubenswrapper[4833]: I1013 08:01:53.942047 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24mlj\" (UniqueName: \"kubernetes.io/projected/d855a17c-efd2-41dc-939d-264069e488e7-kube-api-access-24mlj\") pod \"barbican-api-6b489798bd-m2wph\" (UID: \"d855a17c-efd2-41dc-939d-264069e488e7\") " pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:54 crc kubenswrapper[4833]: I1013 08:01:54.021687 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:54 crc kubenswrapper[4833]: I1013 08:01:54.200334 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" event={"ID":"1ece2475-4308-40f4-9c61-9663fa98fa06","Type":"ContainerStarted","Data":"385a5ecc93b9e35ecae77545bc42aa1c021f860e6d6720584a1bca79e82bca53"} Oct 13 08:01:54 crc kubenswrapper[4833]: I1013 08:01:54.237170 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" podStartSLOduration=3.237144591 podStartE2EDuration="3.237144591s" podCreationTimestamp="2025-10-13 08:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:01:54.221820875 +0000 UTC m=+5604.322243791" watchObservedRunningTime="2025-10-13 08:01:54.237144591 +0000 UTC m=+5604.337567517" Oct 13 08:01:54 crc kubenswrapper[4833]: I1013 08:01:54.558605 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b489798bd-m2wph"] Oct 13 08:01:55 crc kubenswrapper[4833]: I1013 08:01:55.213725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b489798bd-m2wph" event={"ID":"d855a17c-efd2-41dc-939d-264069e488e7","Type":"ContainerStarted","Data":"46790a0995d9328762c9fe8787f1f6782233d782024d3b814065475316a2f51b"} Oct 13 08:01:55 crc kubenswrapper[4833]: I1013 08:01:55.214124 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b489798bd-m2wph" event={"ID":"d855a17c-efd2-41dc-939d-264069e488e7","Type":"ContainerStarted","Data":"3ef2b75d553fff011e612eb5dc74de0678d4416837ce67c52e59bbeb35c74b26"} Oct 13 08:01:55 crc kubenswrapper[4833]: I1013 08:01:55.214148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b489798bd-m2wph" event={"ID":"d855a17c-efd2-41dc-939d-264069e488e7","Type":"ContainerStarted","Data":"ab8fb8ce459a7acedf7de682654c8c2ca52a5576a08491117a59bf564305fe0e"} Oct 13 08:01:55 crc kubenswrapper[4833]: I1013 08:01:55.214274 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:55 crc kubenswrapper[4833]: I1013 08:01:55.214798 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:01:55 crc kubenswrapper[4833]: I1013 08:01:55.214815 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:01:58 crc kubenswrapper[4833]: I1013 08:01:58.413730 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:01:58 crc kubenswrapper[4833]: I1013 08:01:58.432108 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9f59b7956-9bj8x" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 08:01:58 crc kubenswrapper[4833]: I1013 08:01:58.448090 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b489798bd-m2wph" podStartSLOduration=5.448063102 podStartE2EDuration="5.448063102s" podCreationTimestamp="2025-10-13 08:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:01:55.232281567 
+0000 UTC m=+5605.332704493" watchObservedRunningTime="2025-10-13 08:01:58.448063102 +0000 UTC m=+5608.548486058" Oct 13 08:01:59 crc kubenswrapper[4833]: I1013 08:01:59.828422 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:02:00 crc kubenswrapper[4833]: I1013 08:02:00.356004 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:02:00 crc kubenswrapper[4833]: I1013 08:02:00.464201 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b489798bd-m2wph" Oct 13 08:02:00 crc kubenswrapper[4833]: I1013 08:02:00.544052 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:02:00 crc kubenswrapper[4833]: I1013 08:02:00.544112 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:02:00 crc kubenswrapper[4833]: I1013 08:02:00.559268 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9f59b7956-9bj8x"] Oct 13 08:02:00 crc kubenswrapper[4833]: I1013 08:02:00.559448 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9f59b7956-9bj8x" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api-log" containerID="cri-o://12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9" gracePeriod=30 Oct 13 08:02:00 crc kubenswrapper[4833]: I1013 08:02:00.559821 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9f59b7956-9bj8x" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api" containerID="cri-o://99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc" gracePeriod=30 Oct 13 08:02:01 crc kubenswrapper[4833]: I1013 08:02:01.268229 4833 generic.go:334] "Generic (PLEG): container finished" podID="a776b922-af90-4326-8f70-261decad52ce" containerID="12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9" exitCode=143 Oct 13 08:02:01 crc kubenswrapper[4833]: I1013 08:02:01.268306 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f59b7956-9bj8x" event={"ID":"a776b922-af90-4326-8f70-261decad52ce","Type":"ContainerDied","Data":"12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9"} Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.012833 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.118270 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7858474d7c-dfrkm"] Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.118697 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" podUID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" containerName="dnsmasq-dns" 
containerID="cri-o://0b268c9b716fc9618bb959a20c0f87908220299864938a40663e7cff2695d0d9" gracePeriod=10 Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.309203 4833 generic.go:334] "Generic (PLEG): container finished" podID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" containerID="0b268c9b716fc9618bb959a20c0f87908220299864938a40663e7cff2695d0d9" exitCode=0 Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.309254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" event={"ID":"f6144ac3-73ed-4d20-8ec7-a971caa832a7","Type":"ContainerDied","Data":"0b268c9b716fc9618bb959a20c0f87908220299864938a40663e7cff2695d0d9"} Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.610096 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.766834 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-sb\") pod \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.766964 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-dns-svc\") pod \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.767044 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vft58\" (UniqueName: \"kubernetes.io/projected/f6144ac3-73ed-4d20-8ec7-a971caa832a7-kube-api-access-vft58\") pod \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.767069 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-config\") pod \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.767083 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-nb\") pod \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\" (UID: \"f6144ac3-73ed-4d20-8ec7-a971caa832a7\") " Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.779892 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6144ac3-73ed-4d20-8ec7-a971caa832a7-kube-api-access-vft58" (OuterVolumeSpecName: "kube-api-access-vft58") pod "f6144ac3-73ed-4d20-8ec7-a971caa832a7" (UID: "f6144ac3-73ed-4d20-8ec7-a971caa832a7"). InnerVolumeSpecName "kube-api-access-vft58". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.825417 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6144ac3-73ed-4d20-8ec7-a971caa832a7" (UID: "f6144ac3-73ed-4d20-8ec7-a971caa832a7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.826929 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-config" (OuterVolumeSpecName: "config") pod "f6144ac3-73ed-4d20-8ec7-a971caa832a7" (UID: "f6144ac3-73ed-4d20-8ec7-a971caa832a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.838269 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6144ac3-73ed-4d20-8ec7-a971caa832a7" (UID: "f6144ac3-73ed-4d20-8ec7-a971caa832a7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.852514 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6144ac3-73ed-4d20-8ec7-a971caa832a7" (UID: "f6144ac3-73ed-4d20-8ec7-a971caa832a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.869699 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.870016 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vft58\" (UniqueName: \"kubernetes.io/projected/f6144ac3-73ed-4d20-8ec7-a971caa832a7-kube-api-access-vft58\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.870166 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.870288 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:02 crc kubenswrapper[4833]: I1013 08:02:02.870414 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6144ac3-73ed-4d20-8ec7-a971caa832a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:03 crc kubenswrapper[4833]: I1013 08:02:03.319614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" event={"ID":"f6144ac3-73ed-4d20-8ec7-a971caa832a7","Type":"ContainerDied","Data":"43e75f8bf16eb7279c726fd2bad451f09c9aafb0ff5c5c511baf1f0420b6baab"} Oct 13 08:02:03 crc kubenswrapper[4833]: I1013 08:02:03.320313 4833 scope.go:117] "RemoveContainer" containerID="0b268c9b716fc9618bb959a20c0f87908220299864938a40663e7cff2695d0d9" Oct 13 08:02:03 crc kubenswrapper[4833]: I1013 08:02:03.319798 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7858474d7c-dfrkm" Oct 13 08:02:03 crc kubenswrapper[4833]: I1013 08:02:03.355163 4833 scope.go:117] "RemoveContainer" containerID="41297594f99bdb7cad78e6761bf3b756eb3fd09c25f900c1d5d75d12e8a0fedc" Oct 13 08:02:03 crc kubenswrapper[4833]: I1013 08:02:03.375579 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7858474d7c-dfrkm"] Oct 13 08:02:03 crc kubenswrapper[4833]: I1013 08:02:03.383754 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7858474d7c-dfrkm"] Oct 13 08:02:03 crc kubenswrapper[4833]: I1013 08:02:03.721985 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9f59b7956-9bj8x" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.39:9311/healthcheck\": read tcp 10.217.0.2:51322->10.217.1.39:9311: read: connection reset by peer" Oct 13 08:02:03 crc kubenswrapper[4833]: I1013 08:02:03.722086 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9f59b7956-9bj8x" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.39:9311/healthcheck\": read tcp 10.217.0.2:51320->10.217.1.39:9311: read: connection reset by peer" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.176730 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.298199 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6z4p\" (UniqueName: \"kubernetes.io/projected/a776b922-af90-4326-8f70-261decad52ce-kube-api-access-n6z4p\") pod \"a776b922-af90-4326-8f70-261decad52ce\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.298398 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-combined-ca-bundle\") pod \"a776b922-af90-4326-8f70-261decad52ce\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.298589 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data\") pod \"a776b922-af90-4326-8f70-261decad52ce\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.298652 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a776b922-af90-4326-8f70-261decad52ce-logs\") pod \"a776b922-af90-4326-8f70-261decad52ce\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.298689 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data-custom\") pod \"a776b922-af90-4326-8f70-261decad52ce\" (UID: \"a776b922-af90-4326-8f70-261decad52ce\") " Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.300399 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a776b922-af90-4326-8f70-261decad52ce-logs" (OuterVolumeSpecName: 
"logs") pod "a776b922-af90-4326-8f70-261decad52ce" (UID: "a776b922-af90-4326-8f70-261decad52ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.303282 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a776b922-af90-4326-8f70-261decad52ce" (UID: "a776b922-af90-4326-8f70-261decad52ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.308699 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a776b922-af90-4326-8f70-261decad52ce-kube-api-access-n6z4p" (OuterVolumeSpecName: "kube-api-access-n6z4p") pod "a776b922-af90-4326-8f70-261decad52ce" (UID: "a776b922-af90-4326-8f70-261decad52ce"). InnerVolumeSpecName "kube-api-access-n6z4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.329828 4833 generic.go:334] "Generic (PLEG): container finished" podID="a776b922-af90-4326-8f70-261decad52ce" containerID="99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc" exitCode=0 Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.329935 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9f59b7956-9bj8x" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.330085 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f59b7956-9bj8x" event={"ID":"a776b922-af90-4326-8f70-261decad52ce","Type":"ContainerDied","Data":"99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc"} Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.330278 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f59b7956-9bj8x" event={"ID":"a776b922-af90-4326-8f70-261decad52ce","Type":"ContainerDied","Data":"bae738b99e2bd6e77cf8d39bc839df543e983770feee41368f035a2b513d0ab5"} Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.330337 4833 scope.go:117] "RemoveContainer" containerID="99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.347072 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data" (OuterVolumeSpecName: "config-data") pod "a776b922-af90-4326-8f70-261decad52ce" (UID: "a776b922-af90-4326-8f70-261decad52ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.347990 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a776b922-af90-4326-8f70-261decad52ce" (UID: "a776b922-af90-4326-8f70-261decad52ce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.400625 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a776b922-af90-4326-8f70-261decad52ce-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.400654 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.400666 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6z4p\" (UniqueName: \"kubernetes.io/projected/a776b922-af90-4326-8f70-261decad52ce-kube-api-access-n6z4p\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.400675 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.400683 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a776b922-af90-4326-8f70-261decad52ce-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.430849 4833 scope.go:117] "RemoveContainer" containerID="12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.466230 4833 scope.go:117] "RemoveContainer" containerID="99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc" Oct 13 08:02:04 crc kubenswrapper[4833]: E1013 08:02:04.466853 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc\": container with ID starting with 99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc not found: ID does not exist" containerID="99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.466919 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc"} err="failed to get container status \"99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc\": rpc error: code = NotFound desc = could not find container \"99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc\": container with ID starting with 99d7291506e02246a4a85e1e17a09381232251a49d2b96d322d184c0cdad85cc not found: ID does not exist" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.466959 4833 scope.go:117] "RemoveContainer" containerID="12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9" Oct 13 08:02:04 crc kubenswrapper[4833]: E1013 08:02:04.467414 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9\": container with ID starting with 12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9 not found: ID does not exist" containerID="12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.467463 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9"} err="failed to get container status \"12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9\": rpc error: code = NotFound desc = could not find container \"12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9\": container with ID starting with 12b3a27311c5dbb16508db9037690f56635e36b2665fc0e86cf1b71351a279c9 not found: ID does not exist" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.636607 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" path="/var/lib/kubelet/pods/f6144ac3-73ed-4d20-8ec7-a971caa832a7/volumes" Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.674548 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9f59b7956-9bj8x"] Oct 13 08:02:04 crc kubenswrapper[4833]: I1013 08:02:04.683176 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-9f59b7956-9bj8x"] Oct 13 08:02:06 crc kubenswrapper[4833]: I1013 08:02:06.644995 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a776b922-af90-4326-8f70-261decad52ce" path="/var/lib/kubelet/pods/a776b922-af90-4326-8f70-261decad52ce/volumes" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.129004 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vt9xr"] Oct 13 08:02:07 crc kubenswrapper[4833]: E1013 08:02:07.129666 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" containerName="init" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.129688 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" containerName="init" Oct 13 08:02:07 crc kubenswrapper[4833]: E1013 08:02:07.129722 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" containerName="dnsmasq-dns" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.129730 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" containerName="dnsmasq-dns" Oct 13 08:02:07 crc kubenswrapper[4833]: E1013 08:02:07.129763 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.129771 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api" Oct 13 08:02:07 crc kubenswrapper[4833]: E1013 08:02:07.129788 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api-log" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.129797 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api-log" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.129979 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6144ac3-73ed-4d20-8ec7-a971caa832a7" containerName="dnsmasq-dns" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.130022 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api-log" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.130038 4833 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a776b922-af90-4326-8f70-261decad52ce" containerName="barbican-api" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.130582 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vt9xr" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.140042 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vt9xr"] Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.247733 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7kp5\" (UniqueName: \"kubernetes.io/projected/62f705b8-c040-4df0-9e2f-e9eb7a71b3ed-kube-api-access-w7kp5\") pod \"neutron-db-create-vt9xr\" (UID: \"62f705b8-c040-4df0-9e2f-e9eb7a71b3ed\") " pod="openstack/neutron-db-create-vt9xr" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.348983 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7kp5\" (UniqueName: \"kubernetes.io/projected/62f705b8-c040-4df0-9e2f-e9eb7a71b3ed-kube-api-access-w7kp5\") pod \"neutron-db-create-vt9xr\" (UID: \"62f705b8-c040-4df0-9e2f-e9eb7a71b3ed\") " pod="openstack/neutron-db-create-vt9xr" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.375416 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7kp5\" (UniqueName: \"kubernetes.io/projected/62f705b8-c040-4df0-9e2f-e9eb7a71b3ed-kube-api-access-w7kp5\") pod \"neutron-db-create-vt9xr\" (UID: \"62f705b8-c040-4df0-9e2f-e9eb7a71b3ed\") " pod="openstack/neutron-db-create-vt9xr" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.482352 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vt9xr" Oct 13 08:02:07 crc kubenswrapper[4833]: I1013 08:02:07.925464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vt9xr"] Oct 13 08:02:07 crc kubenswrapper[4833]: W1013 08:02:07.926861 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f705b8_c040_4df0_9e2f_e9eb7a71b3ed.slice/crio-b76aaf231f15c71a20aeb3c17544691934344223400061e2908554d778dab05e WatchSource:0}: Error finding container b76aaf231f15c71a20aeb3c17544691934344223400061e2908554d778dab05e: Status 404 returned error can't find the container with id b76aaf231f15c71a20aeb3c17544691934344223400061e2908554d778dab05e Oct 13 08:02:08 crc kubenswrapper[4833]: I1013 08:02:08.368088 4833 generic.go:334] "Generic (PLEG): container finished" podID="62f705b8-c040-4df0-9e2f-e9eb7a71b3ed" containerID="56f49c78bc36d5b56105a0b0172978f946dbf5bf3df6d8e6f5df850affcb6085" exitCode=0 Oct 13 08:02:08 crc kubenswrapper[4833]: I1013 08:02:08.368176 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vt9xr" event={"ID":"62f705b8-c040-4df0-9e2f-e9eb7a71b3ed","Type":"ContainerDied","Data":"56f49c78bc36d5b56105a0b0172978f946dbf5bf3df6d8e6f5df850affcb6085"} Oct 13 08:02:08 crc kubenswrapper[4833]: I1013 08:02:08.368270 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vt9xr" event={"ID":"62f705b8-c040-4df0-9e2f-e9eb7a71b3ed","Type":"ContainerStarted","Data":"b76aaf231f15c71a20aeb3c17544691934344223400061e2908554d778dab05e"} Oct 13 08:02:09 crc kubenswrapper[4833]: I1013 08:02:09.745531 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vt9xr" Oct 13 08:02:09 crc kubenswrapper[4833]: I1013 08:02:09.793654 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7kp5\" (UniqueName: \"kubernetes.io/projected/62f705b8-c040-4df0-9e2f-e9eb7a71b3ed-kube-api-access-w7kp5\") pod \"62f705b8-c040-4df0-9e2f-e9eb7a71b3ed\" (UID: \"62f705b8-c040-4df0-9e2f-e9eb7a71b3ed\") " Oct 13 08:02:09 crc kubenswrapper[4833]: I1013 08:02:09.802973 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f705b8-c040-4df0-9e2f-e9eb7a71b3ed-kube-api-access-w7kp5" (OuterVolumeSpecName: "kube-api-access-w7kp5") pod "62f705b8-c040-4df0-9e2f-e9eb7a71b3ed" (UID: "62f705b8-c040-4df0-9e2f-e9eb7a71b3ed"). InnerVolumeSpecName "kube-api-access-w7kp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:02:09 crc kubenswrapper[4833]: I1013 08:02:09.896461 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7kp5\" (UniqueName: \"kubernetes.io/projected/62f705b8-c040-4df0-9e2f-e9eb7a71b3ed-kube-api-access-w7kp5\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:10 crc kubenswrapper[4833]: I1013 08:02:10.393977 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vt9xr" event={"ID":"62f705b8-c040-4df0-9e2f-e9eb7a71b3ed","Type":"ContainerDied","Data":"b76aaf231f15c71a20aeb3c17544691934344223400061e2908554d778dab05e"} Oct 13 08:02:10 crc kubenswrapper[4833]: I1013 08:02:10.394045 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76aaf231f15c71a20aeb3c17544691934344223400061e2908554d778dab05e" Oct 13 08:02:10 crc kubenswrapper[4833]: I1013 08:02:10.394128 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vt9xr" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.189862 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3032-account-create-x8xnn"] Oct 13 08:02:17 crc kubenswrapper[4833]: E1013 08:02:17.190740 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f705b8-c040-4df0-9e2f-e9eb7a71b3ed" containerName="mariadb-database-create" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.190757 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f705b8-c040-4df0-9e2f-e9eb7a71b3ed" containerName="mariadb-database-create" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.190959 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f705b8-c040-4df0-9e2f-e9eb7a71b3ed" containerName="mariadb-database-create" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.191642 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3032-account-create-x8xnn" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.194327 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.201358 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3032-account-create-x8xnn"] Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.267253 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrssh\" (UniqueName: \"kubernetes.io/projected/d242248e-b0f5-48a2-bf01-94af4ddf9f34-kube-api-access-vrssh\") pod \"neutron-3032-account-create-x8xnn\" (UID: \"d242248e-b0f5-48a2-bf01-94af4ddf9f34\") " pod="openstack/neutron-3032-account-create-x8xnn" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.370132 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrssh\" (UniqueName: \"kubernetes.io/projected/d242248e-b0f5-48a2-bf01-94af4ddf9f34-kube-api-access-vrssh\") pod \"neutron-3032-account-create-x8xnn\" (UID: \"d242248e-b0f5-48a2-bf01-94af4ddf9f34\") " pod="openstack/neutron-3032-account-create-x8xnn" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.393888 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrssh\" (UniqueName: \"kubernetes.io/projected/d242248e-b0f5-48a2-bf01-94af4ddf9f34-kube-api-access-vrssh\") pod \"neutron-3032-account-create-x8xnn\" (UID: \"d242248e-b0f5-48a2-bf01-94af4ddf9f34\") " pod="openstack/neutron-3032-account-create-x8xnn" Oct 13 08:02:17 crc kubenswrapper[4833]: I1013 08:02:17.521784 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3032-account-create-x8xnn" Oct 13 08:02:18 crc kubenswrapper[4833]: I1013 08:02:18.008897 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3032-account-create-x8xnn"] Oct 13 08:02:18 crc kubenswrapper[4833]: I1013 08:02:18.493471 4833 generic.go:334] "Generic (PLEG): container finished" podID="d242248e-b0f5-48a2-bf01-94af4ddf9f34" containerID="2f69f3d34d6d455330886244c23d5da749e1f774325cb65cca32d66a215e3b41" exitCode=0 Oct 13 08:02:18 crc kubenswrapper[4833]: I1013 08:02:18.493575 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3032-account-create-x8xnn" event={"ID":"d242248e-b0f5-48a2-bf01-94af4ddf9f34","Type":"ContainerDied","Data":"2f69f3d34d6d455330886244c23d5da749e1f774325cb65cca32d66a215e3b41"} Oct 13 08:02:18 crc kubenswrapper[4833]: I1013 08:02:18.493925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3032-account-create-x8xnn" event={"ID":"d242248e-b0f5-48a2-bf01-94af4ddf9f34","Type":"ContainerStarted","Data":"4d3e244f1c85362115ccbfffb4067ae1093bfd96fdd2366292cf9282241c8689"} Oct 13 08:02:19 crc kubenswrapper[4833]: I1013 08:02:19.895520 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3032-account-create-x8xnn" Oct 13 08:02:19 crc kubenswrapper[4833]: I1013 08:02:19.939511 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrssh\" (UniqueName: \"kubernetes.io/projected/d242248e-b0f5-48a2-bf01-94af4ddf9f34-kube-api-access-vrssh\") pod \"d242248e-b0f5-48a2-bf01-94af4ddf9f34\" (UID: \"d242248e-b0f5-48a2-bf01-94af4ddf9f34\") " Oct 13 08:02:19 crc kubenswrapper[4833]: I1013 08:02:19.947304 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d242248e-b0f5-48a2-bf01-94af4ddf9f34-kube-api-access-vrssh" (OuterVolumeSpecName: "kube-api-access-vrssh") pod "d242248e-b0f5-48a2-bf01-94af4ddf9f34" (UID: "d242248e-b0f5-48a2-bf01-94af4ddf9f34"). InnerVolumeSpecName "kube-api-access-vrssh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:02:20 crc kubenswrapper[4833]: I1013 08:02:20.042702 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrssh\" (UniqueName: \"kubernetes.io/projected/d242248e-b0f5-48a2-bf01-94af4ddf9f34-kube-api-access-vrssh\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:20 crc kubenswrapper[4833]: I1013 08:02:20.521017 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3032-account-create-x8xnn" event={"ID":"d242248e-b0f5-48a2-bf01-94af4ddf9f34","Type":"ContainerDied","Data":"4d3e244f1c85362115ccbfffb4067ae1093bfd96fdd2366292cf9282241c8689"} Oct 13 08:02:20 crc kubenswrapper[4833]: I1013 08:02:20.521067 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d3e244f1c85362115ccbfffb4067ae1093bfd96fdd2366292cf9282241c8689" Oct 13 08:02:20 crc kubenswrapper[4833]: I1013 08:02:20.521077 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3032-account-create-x8xnn" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.418889 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cnqxb"] Oct 13 08:02:22 crc kubenswrapper[4833]: E1013 08:02:22.419393 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d242248e-b0f5-48a2-bf01-94af4ddf9f34" containerName="mariadb-account-create" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.419413 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d242248e-b0f5-48a2-bf01-94af4ddf9f34" containerName="mariadb-account-create" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.419651 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d242248e-b0f5-48a2-bf01-94af4ddf9f34" containerName="mariadb-account-create" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.420420 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.422803 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.423310 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.425916 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bpdk5" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.432682 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cnqxb"] Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.497951 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7954\" (UniqueName: \"kubernetes.io/projected/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-kube-api-access-j7954\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.498031 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-combined-ca-bundle\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.498156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-config\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.600804 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7954\" (UniqueName: \"kubernetes.io/projected/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-kube-api-access-j7954\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.600868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-combined-ca-bundle\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.600962 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-config\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.607471 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-config\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.612160 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-combined-ca-bundle\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.627285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7954\" (UniqueName: \"kubernetes.io/projected/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-kube-api-access-j7954\") pod \"neutron-db-sync-cnqxb\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:22 crc kubenswrapper[4833]: I1013 08:02:22.785348 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:23 crc kubenswrapper[4833]: I1013 08:02:23.251674 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cnqxb"] Oct 13 08:02:23 crc kubenswrapper[4833]: W1013 08:02:23.257704 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3efa7e52_a2ff_4ce0_a294_3f326ef52cde.slice/crio-b648bf6c067dabb74bf8157146cfe8aa86c1d30d65e436998bda29acb82b0d61 WatchSource:0}: Error finding container b648bf6c067dabb74bf8157146cfe8aa86c1d30d65e436998bda29acb82b0d61: Status 404 returned error can't find the container with id b648bf6c067dabb74bf8157146cfe8aa86c1d30d65e436998bda29acb82b0d61 Oct 13 08:02:23 crc kubenswrapper[4833]: I1013 08:02:23.555103 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cnqxb" event={"ID":"3efa7e52-a2ff-4ce0-a294-3f326ef52cde","Type":"ContainerStarted","Data":"f107cc8b2a19371cedc2a10f03fbf969373d3440068d4180c418d0d725820b85"} Oct 13 08:02:23 crc kubenswrapper[4833]: I1013 08:02:23.556287 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cnqxb" event={"ID":"3efa7e52-a2ff-4ce0-a294-3f326ef52cde","Type":"ContainerStarted","Data":"b648bf6c067dabb74bf8157146cfe8aa86c1d30d65e436998bda29acb82b0d61"} Oct 13 08:02:23 crc kubenswrapper[4833]: I1013 08:02:23.571642 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cnqxb" podStartSLOduration=1.571627664 podStartE2EDuration="1.571627664s" podCreationTimestamp="2025-10-13 08:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:02:23.570179672 +0000 UTC m=+5633.670602588" watchObservedRunningTime="2025-10-13 08:02:23.571627664 +0000 UTC m=+5633.672050580" Oct 13 08:02:27 crc kubenswrapper[4833]: I1013 08:02:27.606384 4833 generic.go:334] "Generic (PLEG): container finished" podID="3efa7e52-a2ff-4ce0-a294-3f326ef52cde" containerID="f107cc8b2a19371cedc2a10f03fbf969373d3440068d4180c418d0d725820b85" exitCode=0 Oct 13 08:02:27 crc kubenswrapper[4833]: I1013 08:02:27.606512 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cnqxb" event={"ID":"3efa7e52-a2ff-4ce0-a294-3f326ef52cde","Type":"ContainerDied","Data":"f107cc8b2a19371cedc2a10f03fbf969373d3440068d4180c418d0d725820b85"} Oct 13 08:02:28 crc kubenswrapper[4833]: I1013 08:02:28.972394 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k8wsq"] Oct 13 08:02:28 crc kubenswrapper[4833]: I1013 08:02:28.975761 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:28 crc kubenswrapper[4833]: I1013 08:02:28.986720 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8wsq"] Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.021589 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.053730 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dmpd\" (UniqueName: \"kubernetes.io/projected/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-kube-api-access-6dmpd\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.054070 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-catalog-content\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.054280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-utilities\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.155735 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-combined-ca-bundle\") pod \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.155862 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-config\") pod \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.155962 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7954\" (UniqueName: \"kubernetes.io/projected/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-kube-api-access-j7954\") pod \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\" (UID: \"3efa7e52-a2ff-4ce0-a294-3f326ef52cde\") " Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.156120 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-utilities\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.156158 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dmpd\" (UniqueName: \"kubernetes.io/projected/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-kube-api-access-6dmpd\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 
crc kubenswrapper[4833]: I1013 08:02:29.156190 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-catalog-content\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.156804 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-catalog-content\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.157261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-utilities\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.165746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-kube-api-access-j7954" (OuterVolumeSpecName: "kube-api-access-j7954") pod "3efa7e52-a2ff-4ce0-a294-3f326ef52cde" (UID: "3efa7e52-a2ff-4ce0-a294-3f326ef52cde"). InnerVolumeSpecName "kube-api-access-j7954". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.180299 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dmpd\" (UniqueName: \"kubernetes.io/projected/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-kube-api-access-6dmpd\") pod \"certified-operators-k8wsq\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.182571 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-config" (OuterVolumeSpecName: "config") pod "3efa7e52-a2ff-4ce0-a294-3f326ef52cde" (UID: "3efa7e52-a2ff-4ce0-a294-3f326ef52cde"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.193571 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3efa7e52-a2ff-4ce0-a294-3f326ef52cde" (UID: "3efa7e52-a2ff-4ce0-a294-3f326ef52cde"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.257599 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7954\" (UniqueName: \"kubernetes.io/projected/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-kube-api-access-j7954\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.257643 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.257657 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3efa7e52-a2ff-4ce0-a294-3f326ef52cde-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.334789 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.626650 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cnqxb" event={"ID":"3efa7e52-a2ff-4ce0-a294-3f326ef52cde","Type":"ContainerDied","Data":"b648bf6c067dabb74bf8157146cfe8aa86c1d30d65e436998bda29acb82b0d61"} Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.626891 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b648bf6c067dabb74bf8157146cfe8aa86c1d30d65e436998bda29acb82b0d61" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.626705 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cnqxb" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.838583 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8wsq"] Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.887730 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58f8c59d4c-ttfk5"] Oct 13 08:02:29 crc kubenswrapper[4833]: E1013 08:02:29.888138 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efa7e52-a2ff-4ce0-a294-3f326ef52cde" containerName="neutron-db-sync" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.888155 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efa7e52-a2ff-4ce0-a294-3f326ef52cde" containerName="neutron-db-sync" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.888312 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efa7e52-a2ff-4ce0-a294-3f326ef52cde" containerName="neutron-db-sync" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.892256 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.913272 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f8c59d4c-ttfk5"] Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.969628 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-config\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.969730 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-sb\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.969773 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-nb\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.969831 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-dns-svc\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:29 crc kubenswrapper[4833]: I1013 08:02:29.969857 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7th\" (UniqueName: \"kubernetes.io/projected/c884898d-1091-4c1e-8497-4bbc8147394b-kube-api-access-td7th\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.037284 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77c9d74d84-z27v9"] Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.040957 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.045730 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.045829 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.045870 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.045922 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bpdk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.074000 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-nb\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.075084 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-nb\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.075625 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-dns-svc\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.075744 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td7th\" (UniqueName: \"kubernetes.io/projected/c884898d-1091-4c1e-8497-4bbc8147394b-kube-api-access-td7th\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.075962 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-config\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.077215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-sb\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.077100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-config\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.076745 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-dns-svc\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.078270 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-sb\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.079660 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77c9d74d84-z27v9"] Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.098366 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td7th\" (UniqueName: \"kubernetes.io/projected/c884898d-1091-4c1e-8497-4bbc8147394b-kube-api-access-td7th\") pod \"dnsmasq-dns-58f8c59d4c-ttfk5\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") " pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.178523 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-httpd-config\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.178794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-combined-ca-bundle\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.178992 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-ovndb-tls-certs\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.179118 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rmh\" (UniqueName: \"kubernetes.io/projected/2ebd46cf-e758-437f-a2d6-0c7c32733228-kube-api-access-k5rmh\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.179255 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-config\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.274109 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.281330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-httpd-config\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.281379 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-combined-ca-bundle\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.281440 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-ovndb-tls-certs\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.281475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rmh\" (UniqueName: \"kubernetes.io/projected/2ebd46cf-e758-437f-a2d6-0c7c32733228-kube-api-access-k5rmh\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.281530 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-config\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.285166 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-ovndb-tls-certs\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.286898 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-config\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.287149 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-httpd-config\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.287809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-combined-ca-bundle\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.310915 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k5rmh\" (UniqueName: \"kubernetes.io/projected/2ebd46cf-e758-437f-a2d6-0c7c32733228-kube-api-access-k5rmh\") pod \"neutron-77c9d74d84-z27v9\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:30 crc kubenswrapper[4833]: I1013 08:02:30.369065 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.542938 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.543310 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.543355 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.544017 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.544069 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" gracePeriod=600 Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.640259 4833 generic.go:334] "Generic (PLEG): container finished" podID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerID="65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2" exitCode=0 Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.640292 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wsq" event={"ID":"07123727-9ae4-4e4d-a27c-b33ccf1b7b09","Type":"ContainerDied","Data":"65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2"} Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.640311 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wsq" event={"ID":"07123727-9ae4-4e4d-a27c-b33ccf1b7b09","Type":"ContainerStarted","Data":"8e7e17cd7a442fd6df4fac5cf40c3cb35ab8ad16f4991a0f51d5ef4264a0fbae"} Oct 13 08:02:31 crc kubenswrapper[4833]: E1013 08:02:30.672665 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:30.727566 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f8c59d4c-ttfk5"] Oct 13 08:02:31 crc kubenswrapper[4833]: W1013 08:02:30.730032 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc884898d_1091_4c1e_8497_4bbc8147394b.slice/crio-fc55fbf410c86fa97f108c5055a058316902fc15135e49a249fb8670844cba01 WatchSource:0}: Error finding container fc55fbf410c86fa97f108c5055a058316902fc15135e49a249fb8670844cba01: Status 404 returned error can't find the container with id fc55fbf410c86fa97f108c5055a058316902fc15135e49a249fb8670844cba01 Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.562607 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77c9d74d84-z27v9"] Oct 13 08:02:31 crc kubenswrapper[4833]: W1013 08:02:31.571190 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ebd46cf_e758_437f_a2d6_0c7c32733228.slice/crio-38b62744b88be5d99f652cacb5e771c04f503b73c93f527dea5d494839471be8 WatchSource:0}: Error finding container 38b62744b88be5d99f652cacb5e771c04f503b73c93f527dea5d494839471be8: Status 404 returned error can't find the container with id 38b62744b88be5d99f652cacb5e771c04f503b73c93f527dea5d494839471be8 Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.653879 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" exitCode=0 Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.654027 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8"} Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.654067 4833 scope.go:117] "RemoveContainer" containerID="529aa9b3cfb16d8dd65469ec64ebdc474a0f44417fe7a26128d25222a7982fb0" Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.655980 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:02:31 crc kubenswrapper[4833]: E1013 08:02:31.656669 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.662763 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c9d74d84-z27v9" event={"ID":"2ebd46cf-e758-437f-a2d6-0c7c32733228","Type":"ContainerStarted","Data":"38b62744b88be5d99f652cacb5e771c04f503b73c93f527dea5d494839471be8"} Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.664699 4833 generic.go:334] "Generic (PLEG): container finished" podID="c884898d-1091-4c1e-8497-4bbc8147394b" containerID="d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff" exitCode=0 Oct 13 08:02:31 crc 
kubenswrapper[4833]: I1013 08:02:31.664753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" event={"ID":"c884898d-1091-4c1e-8497-4bbc8147394b","Type":"ContainerDied","Data":"d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff"} Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.664770 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" event={"ID":"c884898d-1091-4c1e-8497-4bbc8147394b","Type":"ContainerStarted","Data":"fc55fbf410c86fa97f108c5055a058316902fc15135e49a249fb8670844cba01"} Oct 13 08:02:31 crc kubenswrapper[4833]: I1013 08:02:31.672278 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wsq" event={"ID":"07123727-9ae4-4e4d-a27c-b33ccf1b7b09","Type":"ContainerStarted","Data":"8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814"} Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.317521 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ddc9cc9f7-vclj7"] Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.319229 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.321300 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.321343 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.337180 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ddc9cc9f7-vclj7"] Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.428627 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-httpd-config\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.428716 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-internal-tls-certs\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.428802 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-config\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.428835 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-ovndb-tls-certs\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.428898 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-public-tls-certs\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.428928 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-combined-ca-bundle\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.429054 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjq6v\" (UniqueName: \"kubernetes.io/projected/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-kube-api-access-tjq6v\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.530038 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-config\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.530246 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-ovndb-tls-certs\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.530354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-public-tls-certs\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.530444 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-combined-ca-bundle\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.530564 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjq6v\" (UniqueName: \"kubernetes.io/projected/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-kube-api-access-tjq6v\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.530740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-httpd-config\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.530821 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-internal-tls-certs\") pod 
\"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.536382 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-config\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.544864 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-internal-tls-certs\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.550241 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-public-tls-certs\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.551084 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-combined-ca-bundle\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.557163 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-ovndb-tls-certs\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.562042 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-httpd-config\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.564880 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjq6v\" (UniqueName: \"kubernetes.io/projected/c9e9ff9f-9222-4649-bb70-e6112a50dfe9-kube-api-access-tjq6v\") pod \"neutron-ddc9cc9f7-vclj7\" (UID: \"c9e9ff9f-9222-4649-bb70-e6112a50dfe9\") " pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.634174 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.683576 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" event={"ID":"c884898d-1091-4c1e-8497-4bbc8147394b","Type":"ContainerStarted","Data":"c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91"} Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.683829 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.689556 4833 generic.go:334] "Generic (PLEG): container finished" podID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerID="8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814" exitCode=0 Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.689622 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wsq" event={"ID":"07123727-9ae4-4e4d-a27c-b33ccf1b7b09","Type":"ContainerDied","Data":"8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814"} Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.697414 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c9d74d84-z27v9" event={"ID":"2ebd46cf-e758-437f-a2d6-0c7c32733228","Type":"ContainerStarted","Data":"14fd0bf4b1314f37dea1ec89b9c2ab1f806ddfd14ed844b10470f61c6572ede8"} Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.697452 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c9d74d84-z27v9" event={"ID":"2ebd46cf-e758-437f-a2d6-0c7c32733228","Type":"ContainerStarted","Data":"99e84833ae368cb2a0a3b799be4d0dec7133177097042778a1fcaa359d089b58"} Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.697710 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:02:32 crc kubenswrapper[4833]: I1013 08:02:32.709506 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" podStartSLOduration=3.709487307 podStartE2EDuration="3.709487307s" podCreationTimestamp="2025-10-13 08:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:02:32.705984877 +0000 UTC m=+5642.806407793" watchObservedRunningTime="2025-10-13 08:02:32.709487307 +0000 UTC m=+5642.809910223" Oct 13 08:02:33 crc kubenswrapper[4833]: W1013 08:02:33.253723 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9e9ff9f_9222_4649_bb70_e6112a50dfe9.slice/crio-62cf3ed9bd7a05775327f06018218a4e897ee8db182a55376e25d48dfd2ece75 WatchSource:0}: Error finding container 62cf3ed9bd7a05775327f06018218a4e897ee8db182a55376e25d48dfd2ece75: Status 404 returned error can't find the container with id 62cf3ed9bd7a05775327f06018218a4e897ee8db182a55376e25d48dfd2ece75 Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.255993 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77c9d74d84-z27v9" podStartSLOduration=3.25597007 podStartE2EDuration="3.25597007s" podCreationTimestamp="2025-10-13 08:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:02:32.760879238 +0000 UTC m=+5642.861302164" 
watchObservedRunningTime="2025-10-13 08:02:33.25597007 +0000 UTC m=+5643.356392996" Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.258504 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ddc9cc9f7-vclj7"] Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.706475 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ddc9cc9f7-vclj7" event={"ID":"c9e9ff9f-9222-4649-bb70-e6112a50dfe9","Type":"ContainerStarted","Data":"d8edc714519be8f038918c46fa2069a3b5cf492933ae2e7532ecd1711235496d"} Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.706753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ddc9cc9f7-vclj7" event={"ID":"c9e9ff9f-9222-4649-bb70-e6112a50dfe9","Type":"ContainerStarted","Data":"5c84fc20de156db9745db6724bf888613c9d2b60bccda9dacb425b6837cc82df"} Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.706764 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ddc9cc9f7-vclj7" event={"ID":"c9e9ff9f-9222-4649-bb70-e6112a50dfe9","Type":"ContainerStarted","Data":"62cf3ed9bd7a05775327f06018218a4e897ee8db182a55376e25d48dfd2ece75"} Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.706794 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.716899 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wsq" event={"ID":"07123727-9ae4-4e4d-a27c-b33ccf1b7b09","Type":"ContainerStarted","Data":"ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83"} Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.728021 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-ddc9cc9f7-vclj7" podStartSLOduration=1.728000857 podStartE2EDuration="1.728000857s" podCreationTimestamp="2025-10-13 08:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:02:33.72562771 +0000 UTC m=+5643.826050626" watchObservedRunningTime="2025-10-13 08:02:33.728000857 +0000 UTC m=+5643.828423773" Oct 13 08:02:33 crc kubenswrapper[4833]: I1013 08:02:33.752395 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k8wsq" podStartSLOduration=2.962654675 podStartE2EDuration="5.75237326s" podCreationTimestamp="2025-10-13 08:02:28 +0000 UTC" firstStartedPulling="2025-10-13 08:02:30.641376693 +0000 UTC m=+5640.741799609" lastFinishedPulling="2025-10-13 08:02:33.431095278 +0000 UTC m=+5643.531518194" observedRunningTime="2025-10-13 08:02:33.750888708 +0000 UTC m=+5643.851311624" watchObservedRunningTime="2025-10-13 08:02:33.75237326 +0000 UTC m=+5643.852796176" Oct 13 08:02:39 crc kubenswrapper[4833]: I1013 08:02:39.335477 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:39 crc kubenswrapper[4833]: I1013 08:02:39.336109 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:39 crc kubenswrapper[4833]: I1013 08:02:39.405877 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:39 crc kubenswrapper[4833]: I1013 08:02:39.866208 4833 kubelet.go:2542] "SyncLoop (probe)" 
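The two "Observed pod startup duration" records above make the tracker's arithmetic visible: when no image was pulled, firstStartedPulling/lastFinishedPulling carry Go's zero time (0001-01-01 00:00:00 +0000 UTC) and podStartSLOduration equals podStartE2EDuration; for certified-operators-k8wsq the ~2.79s image pull is excluded, turning a 5.75s end-to-end start into a 2.96s SLO figure. A throwaway parsing sketch that reproduces those numbers from the record (it assumes only the key="value" layout shown here; this is string handling, not a kubelet API):

```python
# Recompute podStartSLOduration for certified-operators-k8wsq from the
# timestamps in the record above. Values are copied verbatim from the log.
from datetime import datetime, timezone

def t(s: str) -> datetime:
    # e.g. "2025-10-13 08:02:30.641376693" -- trim fraction to microseconds
    base, frac = s.split(".")
    return datetime.strptime(base, "%Y-%m-%d %H:%M:%S").replace(
        microsecond=int(frac[:6].ljust(6, "0")), tzinfo=timezone.utc)

created  = datetime(2025, 10, 13, 8, 2, 28, tzinfo=timezone.utc)  # podCreationTimestamp
running  = t("2025-10-13 08:02:33.75237326")                      # watchObservedRunningTime
pull_beg = t("2025-10-13 08:02:30.641376693")                     # firstStartedPulling
pull_end = t("2025-10-13 08:02:33.431095278")                     # lastFinishedPulling

e2e = (running - created).total_seconds()          # ~5.752373s = podStartE2EDuration
slo = e2e - (pull_end - pull_beg).total_seconds()  # image-pull time excluded
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")            # SLO ~2.962654s = podStartSLOduration
```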
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:39 crc kubenswrapper[4833]: I1013 08:02:39.933881 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8wsq"] Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.276378 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.371706 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757f897c4f-nv78p"] Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.372048 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" podUID="1ece2475-4308-40f4-9c61-9663fa98fa06" containerName="dnsmasq-dns" containerID="cri-o://385a5ecc93b9e35ecae77545bc42aa1c021f860e6d6720584a1bca79e82bca53" gracePeriod=10 Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.806556 4833 generic.go:334] "Generic (PLEG): container finished" podID="1ece2475-4308-40f4-9c61-9663fa98fa06" containerID="385a5ecc93b9e35ecae77545bc42aa1c021f860e6d6720584a1bca79e82bca53" exitCode=0 Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.806590 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" event={"ID":"1ece2475-4308-40f4-9c61-9663fa98fa06","Type":"ContainerDied","Data":"385a5ecc93b9e35ecae77545bc42aa1c021f860e6d6720584a1bca79e82bca53"} Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.806838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" event={"ID":"1ece2475-4308-40f4-9c61-9663fa98fa06","Type":"ContainerDied","Data":"21f6aedf27a55d11513c381a5d4f37e99ffd758c7502e17ec5be0207a270aec8"} Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.806852 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f6aedf27a55d11513c381a5d4f37e99ffd758c7502e17ec5be0207a270aec8" Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.874613 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.989207 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-dns-svc\") pod \"1ece2475-4308-40f4-9c61-9663fa98fa06\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.989255 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-sb\") pod \"1ece2475-4308-40f4-9c61-9663fa98fa06\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.989299 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-config\") pod \"1ece2475-4308-40f4-9c61-9663fa98fa06\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.989370 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bpfm\" (UniqueName: \"kubernetes.io/projected/1ece2475-4308-40f4-9c61-9663fa98fa06-kube-api-access-5bpfm\") pod \"1ece2475-4308-40f4-9c61-9663fa98fa06\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.990107 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-nb\") pod \"1ece2475-4308-40f4-9c61-9663fa98fa06\" (UID: \"1ece2475-4308-40f4-9c61-9663fa98fa06\") " Oct 13 08:02:40 crc kubenswrapper[4833]: I1013 08:02:40.994295 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ece2475-4308-40f4-9c61-9663fa98fa06-kube-api-access-5bpfm" (OuterVolumeSpecName: "kube-api-access-5bpfm") pod "1ece2475-4308-40f4-9c61-9663fa98fa06" (UID: "1ece2475-4308-40f4-9c61-9663fa98fa06"). InnerVolumeSpecName "kube-api-access-5bpfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.030450 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ece2475-4308-40f4-9c61-9663fa98fa06" (UID: "1ece2475-4308-40f4-9c61-9663fa98fa06"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.030948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ece2475-4308-40f4-9c61-9663fa98fa06" (UID: "1ece2475-4308-40f4-9c61-9663fa98fa06"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.037519 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ece2475-4308-40f4-9c61-9663fa98fa06" (UID: "1ece2475-4308-40f4-9c61-9663fa98fa06"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.048185 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-config" (OuterVolumeSpecName: "config") pod "1ece2475-4308-40f4-9c61-9663fa98fa06" (UID: "1ece2475-4308-40f4-9c61-9663fa98fa06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.092246 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.092275 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.092286 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.092297 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bpfm\" (UniqueName: \"kubernetes.io/projected/1ece2475-4308-40f4-9c61-9663fa98fa06-kube-api-access-5bpfm\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.092306 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ece2475-4308-40f4-9c61-9663fa98fa06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.816193 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757f897c4f-nv78p" Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.816469 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k8wsq" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerName="registry-server" containerID="cri-o://ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83" gracePeriod=2 Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.867884 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757f897c4f-nv78p"] Oct 13 08:02:41 crc kubenswrapper[4833]: I1013 08:02:41.879820 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757f897c4f-nv78p"] Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.651634 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ece2475-4308-40f4-9c61-9663fa98fa06" path="/var/lib/kubelet/pods/1ece2475-4308-40f4-9c61-9663fa98fa06/volumes" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.792332 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.870805 4833 generic.go:334] "Generic (PLEG): container finished" podID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerID="ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83" exitCode=0 Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.870847 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wsq" event={"ID":"07123727-9ae4-4e4d-a27c-b33ccf1b7b09","Type":"ContainerDied","Data":"ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83"} Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.870874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8wsq" event={"ID":"07123727-9ae4-4e4d-a27c-b33ccf1b7b09","Type":"ContainerDied","Data":"8e7e17cd7a442fd6df4fac5cf40c3cb35ab8ad16f4991a0f51d5ef4264a0fbae"} Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.870887 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8wsq" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.870892 4833 scope.go:117] "RemoveContainer" containerID="ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.921562 4833 scope.go:117] "RemoveContainer" containerID="8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.942482 4833 scope.go:117] "RemoveContainer" containerID="65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.967671 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-utilities\") pod \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.967756 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmpd\" (UniqueName: \"kubernetes.io/projected/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-kube-api-access-6dmpd\") pod \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.967808 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-catalog-content\") pod \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\" (UID: \"07123727-9ae4-4e4d-a27c-b33ccf1b7b09\") " Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.970252 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-utilities" (OuterVolumeSpecName: "utilities") pod "07123727-9ae4-4e4d-a27c-b33ccf1b7b09" (UID: "07123727-9ae4-4e4d-a27c-b33ccf1b7b09"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.975148 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-kube-api-access-6dmpd" (OuterVolumeSpecName: "kube-api-access-6dmpd") pod "07123727-9ae4-4e4d-a27c-b33ccf1b7b09" (UID: "07123727-9ae4-4e4d-a27c-b33ccf1b7b09"). InnerVolumeSpecName "kube-api-access-6dmpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.997997 4833 scope.go:117] "RemoveContainer" containerID="ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83" Oct 13 08:02:42 crc kubenswrapper[4833]: E1013 08:02:42.998357 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83\": container with ID starting with ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83 not found: ID does not exist" containerID="ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.998414 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83"} err="failed to get container status \"ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83\": rpc error: code = NotFound desc = could not find container \"ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83\": container with ID starting with ae87f49cf471d6e97e5f52dcec24f7dae992efa5c4877bb16760f03942ea7f83 not found: ID does not exist" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.998450 4833 scope.go:117] "RemoveContainer" containerID="8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814" Oct 13 08:02:42 crc kubenswrapper[4833]: E1013 08:02:42.998876 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814\": container with ID starting with 8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814 not found: ID does not exist" containerID="8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.998904 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814"} err="failed to get container status \"8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814\": rpc error: code = NotFound desc = could not find container \"8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814\": container with ID starting with 8318109a0be4dbf4a25fa66c711fea4266e870a05a1226c10777cd4f9e755814 not found: ID does not exist" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.998922 4833 scope.go:117] "RemoveContainer" containerID="65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2" Oct 13 08:02:42 crc kubenswrapper[4833]: E1013 08:02:42.999165 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2\": container with ID starting with 65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2 not found: ID does not 
exist" containerID="65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2" Oct 13 08:02:42 crc kubenswrapper[4833]: I1013 08:02:42.999186 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2"} err="failed to get container status \"65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2\": rpc error: code = NotFound desc = could not find container \"65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2\": container with ID starting with 65c127428d45086d21e259c8f6bd875ff4f97f488d09f73c66a3d2874deb9cd2 not found: ID does not exist" Oct 13 08:02:43 crc kubenswrapper[4833]: I1013 08:02:43.022381 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07123727-9ae4-4e4d-a27c-b33ccf1b7b09" (UID: "07123727-9ae4-4e4d-a27c-b33ccf1b7b09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:02:43 crc kubenswrapper[4833]: I1013 08:02:43.070724 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:43 crc kubenswrapper[4833]: I1013 08:02:43.070946 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dmpd\" (UniqueName: \"kubernetes.io/projected/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-kube-api-access-6dmpd\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:43 crc kubenswrapper[4833]: I1013 08:02:43.071003 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07123727-9ae4-4e4d-a27c-b33ccf1b7b09-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:02:43 crc kubenswrapper[4833]: I1013 08:02:43.208880 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8wsq"] Oct 13 08:02:43 crc kubenswrapper[4833]: I1013 08:02:43.219355 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k8wsq"] Oct 13 08:02:44 crc kubenswrapper[4833]: I1013 08:02:44.645757 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" path="/var/lib/kubelet/pods/07123727-9ae4-4e4d-a27c-b33ccf1b7b09/volumes" Oct 13 08:02:46 crc kubenswrapper[4833]: I1013 08:02:46.628689 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:02:46 crc kubenswrapper[4833]: E1013 08:02:46.629253 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:02:57 crc kubenswrapper[4833]: I1013 08:02:57.630121 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:02:57 crc kubenswrapper[4833]: E1013 08:02:57.631228 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:03:00 crc kubenswrapper[4833]: I1013 08:03:00.383188 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:03:02 crc kubenswrapper[4833]: I1013 08:03:02.659121 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-ddc9cc9f7-vclj7" Oct 13 08:03:02 crc kubenswrapper[4833]: I1013 08:03:02.755931 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77c9d74d84-z27v9"] Oct 13 08:03:02 crc kubenswrapper[4833]: I1013 08:03:02.756189 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77c9d74d84-z27v9" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerName="neutron-api" containerID="cri-o://99e84833ae368cb2a0a3b799be4d0dec7133177097042778a1fcaa359d089b58" gracePeriod=30 Oct 13 08:03:02 crc kubenswrapper[4833]: I1013 08:03:02.756287 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77c9d74d84-z27v9" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerName="neutron-httpd" containerID="cri-o://14fd0bf4b1314f37dea1ec89b9c2ab1f806ddfd14ed844b10470f61c6572ede8" gracePeriod=30 Oct 13 08:03:03 crc kubenswrapper[4833]: I1013 08:03:03.085230 4833 generic.go:334] "Generic (PLEG): container finished" podID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerID="14fd0bf4b1314f37dea1ec89b9c2ab1f806ddfd14ed844b10470f61c6572ede8" exitCode=0 Oct 13 08:03:03 crc kubenswrapper[4833]: I1013 08:03:03.085253 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c9d74d84-z27v9" event={"ID":"2ebd46cf-e758-437f-a2d6-0c7c32733228","Type":"ContainerDied","Data":"14fd0bf4b1314f37dea1ec89b9c2ab1f806ddfd14ed844b10470f61c6572ede8"} Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.120678 4833 generic.go:334] "Generic (PLEG): container finished" podID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerID="99e84833ae368cb2a0a3b799be4d0dec7133177097042778a1fcaa359d089b58" exitCode=0 Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.120729 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c9d74d84-z27v9" event={"ID":"2ebd46cf-e758-437f-a2d6-0c7c32733228","Type":"ContainerDied","Data":"99e84833ae368cb2a0a3b799be4d0dec7133177097042778a1fcaa359d089b58"} Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.244125 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.357858 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5rmh\" (UniqueName: \"kubernetes.io/projected/2ebd46cf-e758-437f-a2d6-0c7c32733228-kube-api-access-k5rmh\") pod \"2ebd46cf-e758-437f-a2d6-0c7c32733228\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.358008 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-httpd-config\") pod \"2ebd46cf-e758-437f-a2d6-0c7c32733228\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.358038 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-config\") pod \"2ebd46cf-e758-437f-a2d6-0c7c32733228\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.358112 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-ovndb-tls-certs\") pod \"2ebd46cf-e758-437f-a2d6-0c7c32733228\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.358133 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-combined-ca-bundle\") pod \"2ebd46cf-e758-437f-a2d6-0c7c32733228\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") " Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.363349 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2ebd46cf-e758-437f-a2d6-0c7c32733228" (UID: "2ebd46cf-e758-437f-a2d6-0c7c32733228"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.363668 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ebd46cf-e758-437f-a2d6-0c7c32733228-kube-api-access-k5rmh" (OuterVolumeSpecName: "kube-api-access-k5rmh") pod "2ebd46cf-e758-437f-a2d6-0c7c32733228" (UID: "2ebd46cf-e758-437f-a2d6-0c7c32733228"). InnerVolumeSpecName "kube-api-access-k5rmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.407127 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-config" (OuterVolumeSpecName: "config") pod "2ebd46cf-e758-437f-a2d6-0c7c32733228" (UID: "2ebd46cf-e758-437f-a2d6-0c7c32733228"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:03:07 crc kubenswrapper[4833]: E1013 08:03:07.427166 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-combined-ca-bundle podName:2ebd46cf-e758-437f-a2d6-0c7c32733228 nodeName:}" failed. No retries permitted until 2025-10-13 08:03:07.927132777 +0000 UTC m=+5678.027555703 (durationBeforeRetry 500ms). 
Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.429450 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2ebd46cf-e758-437f-a2d6-0c7c32733228" (UID: "2ebd46cf-e758-437f-a2d6-0c7c32733228"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.459924 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-httpd-config\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.459956 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-config\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.459965 4833 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.459976 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5rmh\" (UniqueName: \"kubernetes.io/projected/2ebd46cf-e758-437f-a2d6-0c7c32733228-kube-api-access-k5rmh\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.969293 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-combined-ca-bundle\") pod \"2ebd46cf-e758-437f-a2d6-0c7c32733228\" (UID: \"2ebd46cf-e758-437f-a2d6-0c7c32733228\") "
Oct 13 08:03:07 crc kubenswrapper[4833]: I1013 08:03:07.974244 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ebd46cf-e758-437f-a2d6-0c7c32733228" (UID: "2ebd46cf-e758-437f-a2d6-0c7c32733228"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:03:08 crc kubenswrapper[4833]: I1013 08:03:08.072619 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd46cf-e758-437f-a2d6-0c7c32733228-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:03:08 crc kubenswrapper[4833]: I1013 08:03:08.131041 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c9d74d84-z27v9" event={"ID":"2ebd46cf-e758-437f-a2d6-0c7c32733228","Type":"ContainerDied","Data":"38b62744b88be5d99f652cacb5e771c04f503b73c93f527dea5d494839471be8"} Oct 13 08:03:08 crc kubenswrapper[4833]: I1013 08:03:08.131141 4833 scope.go:117] "RemoveContainer" containerID="14fd0bf4b1314f37dea1ec89b9c2ab1f806ddfd14ed844b10470f61c6572ede8" Oct 13 08:03:08 crc kubenswrapper[4833]: I1013 08:03:08.131199 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77c9d74d84-z27v9" Oct 13 08:03:08 crc kubenswrapper[4833]: I1013 08:03:08.164897 4833 scope.go:117] "RemoveContainer" containerID="99e84833ae368cb2a0a3b799be4d0dec7133177097042778a1fcaa359d089b58" Oct 13 08:03:08 crc kubenswrapper[4833]: I1013 08:03:08.165916 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77c9d74d84-z27v9"] Oct 13 08:03:08 crc kubenswrapper[4833]: I1013 08:03:08.181032 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77c9d74d84-z27v9"] Oct 13 08:03:08 crc kubenswrapper[4833]: I1013 08:03:08.638295 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" path="/var/lib/kubelet/pods/2ebd46cf-e758-437f-a2d6-0c7c32733228/volumes" Oct 13 08:03:09 crc kubenswrapper[4833]: I1013 08:03:09.627514 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:03:09 crc kubenswrapper[4833]: E1013 08:03:09.628151 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.191836 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dcsbj"] Oct 13 08:03:12 crc kubenswrapper[4833]: E1013 08:03:12.192564 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerName="registry-server" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192582 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerName="registry-server" Oct 13 08:03:12 crc kubenswrapper[4833]: E1013 08:03:12.192603 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerName="neutron-api" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192612 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerName="neutron-api" Oct 13 08:03:12 crc kubenswrapper[4833]: E1013 08:03:12.192630 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" 
containerName="extract-content" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192639 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerName="extract-content" Oct 13 08:03:12 crc kubenswrapper[4833]: E1013 08:03:12.192649 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ece2475-4308-40f4-9c61-9663fa98fa06" containerName="init" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192656 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ece2475-4308-40f4-9c61-9663fa98fa06" containerName="init" Oct 13 08:03:12 crc kubenswrapper[4833]: E1013 08:03:12.192676 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerName="extract-utilities" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192684 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerName="extract-utilities" Oct 13 08:03:12 crc kubenswrapper[4833]: E1013 08:03:12.192713 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ece2475-4308-40f4-9c61-9663fa98fa06" containerName="dnsmasq-dns" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192721 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ece2475-4308-40f4-9c61-9663fa98fa06" containerName="dnsmasq-dns" Oct 13 08:03:12 crc kubenswrapper[4833]: E1013 08:03:12.192740 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerName="neutron-httpd" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192770 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerName="neutron-httpd" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192987 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerName="neutron-httpd" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.192997 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="07123727-9ae4-4e4d-a27c-b33ccf1b7b09" containerName="registry-server" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.193007 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebd46cf-e758-437f-a2d6-0c7c32733228" containerName="neutron-api" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.193019 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ece2475-4308-40f4-9c61-9663fa98fa06" containerName="dnsmasq-dns" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.193612 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.197561 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.198632 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.198859 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.199016 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.199165 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fkf6t" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.221654 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dcsbj"] Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.247750 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-swiftconf\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.247914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j2tp\" (UniqueName: \"kubernetes.io/projected/26e000c4-06bf-4225-b73d-6738529f741a-kube-api-access-8j2tp\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.247985 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-dispersionconf\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.248111 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-combined-ca-bundle\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.248205 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-scripts\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.248239 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-ring-data-devices\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.248319 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26e000c4-06bf-4225-b73d-6738529f741a-etc-swift\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.292085 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bc49bc47-zb67j"] Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.303734 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.304079 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bc49bc47-zb67j"] Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.349922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-combined-ca-bundle\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.349981 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-dns-svc\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350067 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-nb\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350094 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-scripts\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350141 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-ring-data-devices\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350799 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-ring-data-devices\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26e000c4-06bf-4225-b73d-6738529f741a-etc-swift\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 
08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350871 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-config\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350899 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-swiftconf\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350849 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-scripts\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350960 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26e000c4-06bf-4225-b73d-6738529f741a-etc-swift\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.350976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cpw\" (UniqueName: \"kubernetes.io/projected/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-kube-api-access-g6cpw\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.351044 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j2tp\" (UniqueName: \"kubernetes.io/projected/26e000c4-06bf-4225-b73d-6738529f741a-kube-api-access-8j2tp\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.351072 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-sb\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.351097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-dispersionconf\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.357347 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-dispersionconf\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.358324 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-combined-ca-bundle\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.358678 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-swiftconf\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.377144 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j2tp\" (UniqueName: \"kubernetes.io/projected/26e000c4-06bf-4225-b73d-6738529f741a-kube-api-access-8j2tp\") pod \"swift-ring-rebalance-dcsbj\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") " pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.452794 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6cpw\" (UniqueName: \"kubernetes.io/projected/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-kube-api-access-g6cpw\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.453256 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-sb\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.453374 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-dns-svc\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.453449 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-nb\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.453517 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-config\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.454166 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-dns-svc\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.454177 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-nb\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.454233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-sb\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.454521 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-config\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.473035 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cpw\" (UniqueName: \"kubernetes.io/projected/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-kube-api-access-g6cpw\") pod \"dnsmasq-dns-58bc49bc47-zb67j\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") " pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.518185 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dcsbj" Oct 13 08:03:12 crc kubenswrapper[4833]: I1013 08:03:12.744623 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" Oct 13 08:03:13 crc kubenswrapper[4833]: I1013 08:03:13.026034 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dcsbj"] Oct 13 08:03:13 crc kubenswrapper[4833]: I1013 08:03:13.181530 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dcsbj" event={"ID":"26e000c4-06bf-4225-b73d-6738529f741a","Type":"ContainerStarted","Data":"06061be4f771600a3ea827882db790934f9826624d2723687d27f9982992bffb"} Oct 13 08:03:13 crc kubenswrapper[4833]: I1013 08:03:13.197507 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bc49bc47-zb67j"] Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.190223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dcsbj" event={"ID":"26e000c4-06bf-4225-b73d-6738529f741a","Type":"ContainerStarted","Data":"0c3df518a8e761aaa9a3e526c91bf4972775b1366a4c72adb06ff8efdffdb702"} Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.191936 4833 generic.go:334] "Generic (PLEG): container finished" podID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" containerID="303cf075d25684ce8378dec9d6a4a126bb061d42ddb8a1be3c99096e77de3af3" exitCode=0 Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.191980 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" event={"ID":"bb80d1ba-d1ff-4641-9e07-c06e41ef4756","Type":"ContainerDied","Data":"303cf075d25684ce8378dec9d6a4a126bb061d42ddb8a1be3c99096e77de3af3"} Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.192094 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" event={"ID":"bb80d1ba-d1ff-4641-9e07-c06e41ef4756","Type":"ContainerStarted","Data":"b7c38eb84230f323213c5adb66b620731a15cbd72522506203cf958f92d50d5e"} 
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.216442 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dcsbj" podStartSLOduration=2.216424326 podStartE2EDuration="2.216424326s" podCreationTimestamp="2025-10-13 08:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:03:14.20812522 +0000 UTC m=+5684.308548136" watchObservedRunningTime="2025-10-13 08:03:14.216424326 +0000 UTC m=+5684.316847252"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.333053 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-54b564595d-h4m8t"]
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.335802 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.340676 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.347974 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54b564595d-h4m8t"]
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.406061 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-etc-swift\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.406130 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-combined-ca-bundle\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.406182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsvd\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-kube-api-access-5vsvd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.406215 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-config-data\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.406304 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-log-httpd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.406351 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-run-httpd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.508343 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-etc-swift\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.508408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-combined-ca-bundle\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.508448 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vsvd\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-kube-api-access-5vsvd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.508477 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-config-data\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.508597 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-log-httpd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.508642 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-run-httpd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.509174 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-run-httpd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.512986 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-log-httpd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.514693 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-config-data\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.515078 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-etc-swift\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.515854 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-combined-ca-bundle\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.532502 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vsvd\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-kube-api-access-5vsvd\") pod \"swift-proxy-54b564595d-h4m8t\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") " pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:14 crc kubenswrapper[4833]: I1013 08:03:14.726584 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:15 crc kubenswrapper[4833]: I1013 08:03:15.201144 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" event={"ID":"bb80d1ba-d1ff-4641-9e07-c06e41ef4756","Type":"ContainerStarted","Data":"77044a256a92aa096dd01dd5c4fcf4eb460b79340af1e07c32c3d8a99fc2df1f"}
Oct 13 08:03:15 crc kubenswrapper[4833]: I1013 08:03:15.201458 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j"
Oct 13 08:03:15 crc kubenswrapper[4833]: I1013 08:03:15.218381 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" podStartSLOduration=3.218360704 podStartE2EDuration="3.218360704s" podCreationTimestamp="2025-10-13 08:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:03:15.217438648 +0000 UTC m=+5685.317861564" watchObservedRunningTime="2025-10-13 08:03:15.218360704 +0000 UTC m=+5685.318783620"
Oct 13 08:03:15 crc kubenswrapper[4833]: I1013 08:03:15.401233 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54b564595d-h4m8t"]
Oct 13 08:03:15 crc kubenswrapper[4833]: W1013 08:03:15.401523 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37bb6b90_3730_4ab2_a31c_9679e76de18d.slice/crio-4454e7c9dfa5fd93871d53b87eb229b6826aa4309af31dfb514c80a34bf8e145 WatchSource:0}: Error finding container 4454e7c9dfa5fd93871d53b87eb229b6826aa4309af31dfb514c80a34bf8e145: Status 404 returned error can't find the container with id 4454e7c9dfa5fd93871d53b87eb229b6826aa4309af31dfb514c80a34bf8e145
Oct 13 08:03:16 crc kubenswrapper[4833]: I1013 08:03:16.210869 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54b564595d-h4m8t" event={"ID":"37bb6b90-3730-4ab2-a31c-9679e76de18d","Type":"ContainerStarted","Data":"59ad2e7c170e9b661eba6b0d20d5a862ff0ce1c13bc957252e9ca921b3efa91f"}
Oct 13 08:03:16 crc kubenswrapper[4833]: I1013 08:03:16.211247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54b564595d-h4m8t" event={"ID":"37bb6b90-3730-4ab2-a31c-9679e76de18d","Type":"ContainerStarted","Data":"5e95ff6027d61dd9ba72acf784e1e847ee100b97bd374704fb9db58ca8fc7f9e"}
Oct 13 08:03:16 crc kubenswrapper[4833]: I1013 08:03:16.211265 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54b564595d-h4m8t" event={"ID":"37bb6b90-3730-4ab2-a31c-9679e76de18d","Type":"ContainerStarted","Data":"4454e7c9dfa5fd93871d53b87eb229b6826aa4309af31dfb514c80a34bf8e145"}
Oct 13 08:03:16 crc kubenswrapper[4833]: I1013 08:03:16.238944 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-54b564595d-h4m8t" podStartSLOduration=2.238927973 podStartE2EDuration="2.238927973s" podCreationTimestamp="2025-10-13 08:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:03:16.235830155 +0000 UTC m=+5686.336253071" watchObservedRunningTime="2025-10-13 08:03:16.238927973 +0000 UTC m=+5686.339350889"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.145223 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6bcbb6d8f6-j7v6l"]
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.147834 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.152901 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.153118 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.168201 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bcbb6d8f6-j7v6l"]
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.233464 4833 generic.go:334] "Generic (PLEG): container finished" podID="26e000c4-06bf-4225-b73d-6738529f741a" containerID="0c3df518a8e761aaa9a3e526c91bf4972775b1366a4c72adb06ff8efdffdb702" exitCode=0
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.233592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dcsbj" event={"ID":"26e000c4-06bf-4225-b73d-6738529f741a","Type":"ContainerDied","Data":"0c3df518a8e761aaa9a3e526c91bf4972775b1366a4c72adb06ff8efdffdb702"}
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.233735 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.233832 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.251966 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-public-tls-certs\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.252009 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44mf\" (UniqueName: \"kubernetes.io/projected/8b84ec2a-5911-404b-a6e4-6654625c0e0f-kube-api-access-q44mf\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.252045 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b84ec2a-5911-404b-a6e4-6654625c0e0f-log-httpd\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.252105 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b84ec2a-5911-404b-a6e4-6654625c0e0f-etc-swift\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.252307 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-internal-tls-certs\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.252466 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b84ec2a-5911-404b-a6e4-6654625c0e0f-run-httpd\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.252641 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-config-data\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.252710 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-combined-ca-bundle\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.356133 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b84ec2a-5911-404b-a6e4-6654625c0e0f-etc-swift\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.356282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-internal-tls-certs\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.356364 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b84ec2a-5911-404b-a6e4-6654625c0e0f-run-httpd\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.356421 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-config-data\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.356451 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-combined-ca-bundle\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.356650 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-public-tls-certs\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.356696 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44mf\" (UniqueName: \"kubernetes.io/projected/8b84ec2a-5911-404b-a6e4-6654625c0e0f-kube-api-access-q44mf\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.356760 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b84ec2a-5911-404b-a6e4-6654625c0e0f-log-httpd\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.357744 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b84ec2a-5911-404b-a6e4-6654625c0e0f-log-httpd\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.357977 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b84ec2a-5911-404b-a6e4-6654625c0e0f-run-httpd\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.360039 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-public-tls-certs\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.360803 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-combined-ca-bundle\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.361111 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b84ec2a-5911-404b-a6e4-6654625c0e0f-etc-swift\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.363263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-config-data\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.369248 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b84ec2a-5911-404b-a6e4-6654625c0e0f-internal-tls-certs\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.385193 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44mf\" (UniqueName: \"kubernetes.io/projected/8b84ec2a-5911-404b-a6e4-6654625c0e0f-kube-api-access-q44mf\") pod \"swift-proxy-6bcbb6d8f6-j7v6l\" (UID: \"8b84ec2a-5911-404b-a6e4-6654625c0e0f\") " pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:17 crc kubenswrapper[4833]: I1013 08:03:17.533895 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.212828 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bcbb6d8f6-j7v6l"]
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.241302 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l" event={"ID":"8b84ec2a-5911-404b-a6e4-6654625c0e0f","Type":"ContainerStarted","Data":"28995499c69be6f6973e21a3eb795077866bcf9d5a2950338dd39e5beeab32e2"}
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.601882 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dcsbj"
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.691681 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-dispersionconf\") pod \"26e000c4-06bf-4225-b73d-6738529f741a\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") "
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.691776 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-scripts\") pod \"26e000c4-06bf-4225-b73d-6738529f741a\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") "
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.691929 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j2tp\" (UniqueName: \"kubernetes.io/projected/26e000c4-06bf-4225-b73d-6738529f741a-kube-api-access-8j2tp\") pod \"26e000c4-06bf-4225-b73d-6738529f741a\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") "
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.691978 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-swiftconf\") pod \"26e000c4-06bf-4225-b73d-6738529f741a\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") "
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.692027 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-ring-data-devices\") pod \"26e000c4-06bf-4225-b73d-6738529f741a\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") "
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.692064 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-combined-ca-bundle\") pod \"26e000c4-06bf-4225-b73d-6738529f741a\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") "
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.692109 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26e000c4-06bf-4225-b73d-6738529f741a-etc-swift\") pod \"26e000c4-06bf-4225-b73d-6738529f741a\" (UID: \"26e000c4-06bf-4225-b73d-6738529f741a\") "
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.693225 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e000c4-06bf-4225-b73d-6738529f741a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "26e000c4-06bf-4225-b73d-6738529f741a" (UID: "26e000c4-06bf-4225-b73d-6738529f741a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.694203 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "26e000c4-06bf-4225-b73d-6738529f741a" (UID: "26e000c4-06bf-4225-b73d-6738529f741a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.694849 4833 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-ring-data-devices\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.694890 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26e000c4-06bf-4225-b73d-6738529f741a-etc-swift\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.699485 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e000c4-06bf-4225-b73d-6738529f741a-kube-api-access-8j2tp" (OuterVolumeSpecName: "kube-api-access-8j2tp") pod "26e000c4-06bf-4225-b73d-6738529f741a" (UID: "26e000c4-06bf-4225-b73d-6738529f741a"). InnerVolumeSpecName "kube-api-access-8j2tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.703897 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "26e000c4-06bf-4225-b73d-6738529f741a" (UID: "26e000c4-06bf-4225-b73d-6738529f741a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.715072 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-scripts" (OuterVolumeSpecName: "scripts") pod "26e000c4-06bf-4225-b73d-6738529f741a" (UID: "26e000c4-06bf-4225-b73d-6738529f741a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.725449 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26e000c4-06bf-4225-b73d-6738529f741a" (UID: "26e000c4-06bf-4225-b73d-6738529f741a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.746589 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "26e000c4-06bf-4225-b73d-6738529f741a" (UID: "26e000c4-06bf-4225-b73d-6738529f741a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.796185 4833 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-dispersionconf\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.796482 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26e000c4-06bf-4225-b73d-6738529f741a-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.796499 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j2tp\" (UniqueName: \"kubernetes.io/projected/26e000c4-06bf-4225-b73d-6738529f741a-kube-api-access-8j2tp\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.796513 4833 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-swiftconf\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:18 crc kubenswrapper[4833]: I1013 08:03:18.796525 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e000c4-06bf-4225-b73d-6738529f741a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:19 crc kubenswrapper[4833]: I1013 08:03:19.251825 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dcsbj" event={"ID":"26e000c4-06bf-4225-b73d-6738529f741a","Type":"ContainerDied","Data":"06061be4f771600a3ea827882db790934f9826624d2723687d27f9982992bffb"}
Oct 13 08:03:19 crc kubenswrapper[4833]: I1013 08:03:19.251874 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06061be4f771600a3ea827882db790934f9826624d2723687d27f9982992bffb"
Oct 13 08:03:19 crc kubenswrapper[4833]: I1013 08:03:19.251934 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dcsbj"
Oct 13 08:03:19 crc kubenswrapper[4833]: I1013 08:03:19.264790 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l" event={"ID":"8b84ec2a-5911-404b-a6e4-6654625c0e0f","Type":"ContainerStarted","Data":"39452ad1098fac87da6818ec5898992c37e3fa713580e8c70293f13abb89fe7b"}
Oct 13 08:03:19 crc kubenswrapper[4833]: I1013 08:03:19.264843 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l" event={"ID":"8b84ec2a-5911-404b-a6e4-6654625c0e0f","Type":"ContainerStarted","Data":"39b66126ea2a1d7a156ef09fe1a6f70d0973f42cdd813f8f851ed7dc70ca9c67"}
Oct 13 08:03:19 crc kubenswrapper[4833]: I1013 08:03:19.266317 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:19 crc kubenswrapper[4833]: I1013 08:03:19.314529 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l" podStartSLOduration=2.314510123 podStartE2EDuration="2.314510123s" podCreationTimestamp="2025-10-13 08:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:03:19.302389498 +0000 UTC m=+5689.402812454" watchObservedRunningTime="2025-10-13 08:03:19.314510123 +0000 UTC m=+5689.414933039"
Oct 13 08:03:20 crc kubenswrapper[4833]: I1013 08:03:20.274596 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:22 crc kubenswrapper[4833]: I1013 08:03:22.745863 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j"
Oct 13 08:03:22 crc kubenswrapper[4833]: I1013 08:03:22.834783 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f8c59d4c-ttfk5"]
Oct 13 08:03:22 crc kubenswrapper[4833]: I1013 08:03:22.836813 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" podUID="c884898d-1091-4c1e-8497-4bbc8147394b" containerName="dnsmasq-dns" containerID="cri-o://c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91" gracePeriod=10
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.312414 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.316983 4833 generic.go:334] "Generic (PLEG): container finished" podID="c884898d-1091-4c1e-8497-4bbc8147394b" containerID="c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91" exitCode=0
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.317036 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" event={"ID":"c884898d-1091-4c1e-8497-4bbc8147394b","Type":"ContainerDied","Data":"c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91"}
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.317066 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5" event={"ID":"c884898d-1091-4c1e-8497-4bbc8147394b","Type":"ContainerDied","Data":"fc55fbf410c86fa97f108c5055a058316902fc15135e49a249fb8670844cba01"}
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.317212 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f8c59d4c-ttfk5"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.317778 4833 scope.go:117] "RemoveContainer" containerID="c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.344062 4833 scope.go:117] "RemoveContainer" containerID="d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.371037 4833 scope.go:117] "RemoveContainer" containerID="c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91"
Oct 13 08:03:23 crc kubenswrapper[4833]: E1013 08:03:23.371633 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91\": container with ID starting with c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91 not found: ID does not exist" containerID="c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.371671 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91"} err="failed to get container status \"c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91\": rpc error: code = NotFound desc = could not find container \"c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91\": container with ID starting with c4c9785d0ea09878ad4d3f48a4fba2abe9d1a883658b3239c81bdf1c51676f91 not found: ID does not exist"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.371697 4833 scope.go:117] "RemoveContainer" containerID="d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff"
Oct 13 08:03:23 crc kubenswrapper[4833]: E1013 08:03:23.372023 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff\": container with ID starting with d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff not found: ID does not exist" containerID="d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.372049 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff"} err="failed to get container status \"d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff\": rpc error: code = NotFound desc = could not find container \"d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff\": container with ID starting with d5572badf75b84d03f25676a305ca4ae34792b607275bff3900eb5c216ffd1ff not found: ID does not exist"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.415385 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-config\") pod \"c884898d-1091-4c1e-8497-4bbc8147394b\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") "
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.415504 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-dns-svc\") pod \"c884898d-1091-4c1e-8497-4bbc8147394b\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") "
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.416389 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-nb\") pod \"c884898d-1091-4c1e-8497-4bbc8147394b\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") "
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.416573 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td7th\" (UniqueName: \"kubernetes.io/projected/c884898d-1091-4c1e-8497-4bbc8147394b-kube-api-access-td7th\") pod \"c884898d-1091-4c1e-8497-4bbc8147394b\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") "
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.416652 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-sb\") pod \"c884898d-1091-4c1e-8497-4bbc8147394b\" (UID: \"c884898d-1091-4c1e-8497-4bbc8147394b\") "
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.420920 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c884898d-1091-4c1e-8497-4bbc8147394b-kube-api-access-td7th" (OuterVolumeSpecName: "kube-api-access-td7th") pod "c884898d-1091-4c1e-8497-4bbc8147394b" (UID: "c884898d-1091-4c1e-8497-4bbc8147394b"). InnerVolumeSpecName "kube-api-access-td7th". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.459388 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c884898d-1091-4c1e-8497-4bbc8147394b" (UID: "c884898d-1091-4c1e-8497-4bbc8147394b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.460137 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c884898d-1091-4c1e-8497-4bbc8147394b" (UID: "c884898d-1091-4c1e-8497-4bbc8147394b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.473679 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-config" (OuterVolumeSpecName: "config") pod "c884898d-1091-4c1e-8497-4bbc8147394b" (UID: "c884898d-1091-4c1e-8497-4bbc8147394b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.474615 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c884898d-1091-4c1e-8497-4bbc8147394b" (UID: "c884898d-1091-4c1e-8497-4bbc8147394b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.519222 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.519261 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-config\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.519275 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.519286 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c884898d-1091-4c1e-8497-4bbc8147394b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.519297 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td7th\" (UniqueName: \"kubernetes.io/projected/c884898d-1091-4c1e-8497-4bbc8147394b-kube-api-access-td7th\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.627147 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8"
Oct 13 08:03:23 crc kubenswrapper[4833]: E1013 08:03:23.627631 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.654059 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f8c59d4c-ttfk5"]
Oct 13 08:03:23 crc kubenswrapper[4833]: I1013 08:03:23.660864 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58f8c59d4c-ttfk5"]
Oct 13 08:03:24 crc kubenswrapper[4833]: I1013 08:03:24.645465 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c884898d-1091-4c1e-8497-4bbc8147394b" path="/var/lib/kubelet/pods/c884898d-1091-4c1e-8497-4bbc8147394b/volumes"
Oct 13 08:03:24 crc kubenswrapper[4833]: I1013 08:03:24.732635 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:24 crc kubenswrapper[4833]: I1013 08:03:24.734842 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:27 crc kubenswrapper[4833]: I1013 08:03:27.548008 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:27 crc kubenswrapper[4833]: I1013 08:03:27.552947 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bcbb6d8f6-j7v6l"
Oct 13 08:03:27 crc kubenswrapper[4833]: I1013 08:03:27.691947 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-54b564595d-h4m8t"]
Oct 13 08:03:27 crc kubenswrapper[4833]: I1013 08:03:27.692516 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-54b564595d-h4m8t" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerName="proxy-httpd" containerID="cri-o://5e95ff6027d61dd9ba72acf784e1e847ee100b97bd374704fb9db58ca8fc7f9e" gracePeriod=30
Oct 13 08:03:27 crc kubenswrapper[4833]: I1013 08:03:27.692749 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-54b564595d-h4m8t" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerName="proxy-server" containerID="cri-o://59ad2e7c170e9b661eba6b0d20d5a862ff0ce1c13bc957252e9ca921b3efa91f" gracePeriod=30
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.363292 4833 generic.go:334] "Generic (PLEG): container finished" podID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerID="59ad2e7c170e9b661eba6b0d20d5a862ff0ce1c13bc957252e9ca921b3efa91f" exitCode=0
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.363339 4833 generic.go:334] "Generic (PLEG): container finished" podID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerID="5e95ff6027d61dd9ba72acf784e1e847ee100b97bd374704fb9db58ca8fc7f9e" exitCode=0
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.363434 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54b564595d-h4m8t" event={"ID":"37bb6b90-3730-4ab2-a31c-9679e76de18d","Type":"ContainerDied","Data":"59ad2e7c170e9b661eba6b0d20d5a862ff0ce1c13bc957252e9ca921b3efa91f"}
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.363533 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54b564595d-h4m8t" event={"ID":"37bb6b90-3730-4ab2-a31c-9679e76de18d","Type":"ContainerDied","Data":"5e95ff6027d61dd9ba72acf784e1e847ee100b97bd374704fb9db58ca8fc7f9e"}
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.792105 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.916075 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vsvd\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-kube-api-access-5vsvd\") pod \"37bb6b90-3730-4ab2-a31c-9679e76de18d\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") "
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.916212 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-run-httpd\") pod \"37bb6b90-3730-4ab2-a31c-9679e76de18d\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") "
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.916252 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-combined-ca-bundle\") pod \"37bb6b90-3730-4ab2-a31c-9679e76de18d\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") "
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.916318 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-config-data\") pod \"37bb6b90-3730-4ab2-a31c-9679e76de18d\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") "
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.916342 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-log-httpd\") pod \"37bb6b90-3730-4ab2-a31c-9679e76de18d\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") "
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.916395 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-etc-swift\") pod \"37bb6b90-3730-4ab2-a31c-9679e76de18d\" (UID: \"37bb6b90-3730-4ab2-a31c-9679e76de18d\") "
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.916613 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "37bb6b90-3730-4ab2-a31c-9679e76de18d" (UID: "37bb6b90-3730-4ab2-a31c-9679e76de18d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.916706 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "37bb6b90-3730-4ab2-a31c-9679e76de18d" (UID: "37bb6b90-3730-4ab2-a31c-9679e76de18d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.917271 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.917294 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37bb6b90-3730-4ab2-a31c-9679e76de18d-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.921446 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "37bb6b90-3730-4ab2-a31c-9679e76de18d" (UID: "37bb6b90-3730-4ab2-a31c-9679e76de18d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.925791 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-kube-api-access-5vsvd" (OuterVolumeSpecName: "kube-api-access-5vsvd") pod "37bb6b90-3730-4ab2-a31c-9679e76de18d" (UID: "37bb6b90-3730-4ab2-a31c-9679e76de18d"). InnerVolumeSpecName "kube-api-access-5vsvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.972228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-config-data" (OuterVolumeSpecName: "config-data") pod "37bb6b90-3730-4ab2-a31c-9679e76de18d" (UID: "37bb6b90-3730-4ab2-a31c-9679e76de18d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:03:28 crc kubenswrapper[4833]: I1013 08:03:28.983356 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37bb6b90-3730-4ab2-a31c-9679e76de18d" (UID: "37bb6b90-3730-4ab2-a31c-9679e76de18d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.018862 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vsvd\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-kube-api-access-5vsvd\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.018918 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.018936 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37bb6b90-3730-4ab2-a31c-9679e76de18d-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.018956 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37bb6b90-3730-4ab2-a31c-9679e76de18d-etc-swift\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.375481 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54b564595d-h4m8t" event={"ID":"37bb6b90-3730-4ab2-a31c-9679e76de18d","Type":"ContainerDied","Data":"4454e7c9dfa5fd93871d53b87eb229b6826aa4309af31dfb514c80a34bf8e145"}
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.375569 4833 scope.go:117] "RemoveContainer" containerID="59ad2e7c170e9b661eba6b0d20d5a862ff0ce1c13bc957252e9ca921b3efa91f"
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.375572 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54b564595d-h4m8t"
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.421026 4833 scope.go:117] "RemoveContainer" containerID="5e95ff6027d61dd9ba72acf784e1e847ee100b97bd374704fb9db58ca8fc7f9e"
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.425463 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-54b564595d-h4m8t"]
Oct 13 08:03:29 crc kubenswrapper[4833]: I1013 08:03:29.432765 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-54b564595d-h4m8t"]
Oct 13 08:03:30 crc kubenswrapper[4833]: I1013 08:03:30.643776 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" path="/var/lib/kubelet/pods/37bb6b90-3730-4ab2-a31c-9679e76de18d/volumes"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.533685 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gvmh4"]
Oct 13 08:03:33 crc kubenswrapper[4833]: E1013 08:03:33.534399 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c884898d-1091-4c1e-8497-4bbc8147394b" containerName="dnsmasq-dns"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534413 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c884898d-1091-4c1e-8497-4bbc8147394b" containerName="dnsmasq-dns"
Oct 13 08:03:33 crc kubenswrapper[4833]: E1013 08:03:33.534427 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerName="proxy-httpd"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534433 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerName="proxy-httpd"
Oct 13 08:03:33 crc kubenswrapper[4833]: E1013 08:03:33.534444 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerName="proxy-server"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534451 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerName="proxy-server"
Oct 13 08:03:33 crc kubenswrapper[4833]: E1013 08:03:33.534465 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e000c4-06bf-4225-b73d-6738529f741a" containerName="swift-ring-rebalance"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534472 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e000c4-06bf-4225-b73d-6738529f741a" containerName="swift-ring-rebalance"
Oct 13 08:03:33 crc kubenswrapper[4833]: E1013 08:03:33.534479 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c884898d-1091-4c1e-8497-4bbc8147394b" containerName="init"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534484 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c884898d-1091-4c1e-8497-4bbc8147394b" containerName="init"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534670 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerName="proxy-httpd"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534683 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="37bb6b90-3730-4ab2-a31c-9679e76de18d" containerName="proxy-server"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534695 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e000c4-06bf-4225-b73d-6738529f741a" containerName="swift-ring-rebalance"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.534704 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c884898d-1091-4c1e-8497-4bbc8147394b" containerName="dnsmasq-dns"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.535249 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvmh4"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.542740 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gvmh4"]
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.706499 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfkrc\" (UniqueName: \"kubernetes.io/projected/5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d-kube-api-access-dfkrc\") pod \"cinder-db-create-gvmh4\" (UID: \"5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d\") " pod="openstack/cinder-db-create-gvmh4"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.808689 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfkrc\" (UniqueName: \"kubernetes.io/projected/5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d-kube-api-access-dfkrc\") pod \"cinder-db-create-gvmh4\" (UID: \"5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d\") " pod="openstack/cinder-db-create-gvmh4"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.830269 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfkrc\" (UniqueName: \"kubernetes.io/projected/5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d-kube-api-access-dfkrc\") pod \"cinder-db-create-gvmh4\" (UID: \"5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d\") " pod="openstack/cinder-db-create-gvmh4"
Oct 13 08:03:33 crc kubenswrapper[4833]: I1013 08:03:33.897253 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvmh4"
Oct 13 08:03:34 crc kubenswrapper[4833]: I1013 08:03:34.329375 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gvmh4"]
Oct 13 08:03:34 crc kubenswrapper[4833]: I1013 08:03:34.417618 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvmh4" event={"ID":"5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d","Type":"ContainerStarted","Data":"fca859b9dec2ada0dc444510986c8e7542696345f54236d46ccb586fc8a6e4fc"}
Oct 13 08:03:34 crc kubenswrapper[4833]: I1013 08:03:34.626622 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8"
Oct 13 08:03:34 crc kubenswrapper[4833]: E1013 08:03:34.626879 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:03:35 crc kubenswrapper[4833]: I1013 08:03:35.427728 4833 generic.go:334] "Generic (PLEG): container finished" podID="5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d" containerID="89dbd316acd0c0fe8845b1bc1c8b4ffe8226fe32ae0be9030235e141d9c27520" exitCode=0
Oct 13 08:03:35 crc kubenswrapper[4833]: I1013 08:03:35.427808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvmh4" event={"ID":"5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d","Type":"ContainerDied","Data":"89dbd316acd0c0fe8845b1bc1c8b4ffe8226fe32ae0be9030235e141d9c27520"}
Oct 13 08:03:36 crc kubenswrapper[4833]: I1013 08:03:36.814039 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvmh4"
Oct 13 08:03:36 crc kubenswrapper[4833]: I1013 08:03:36.971558 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfkrc\" (UniqueName: \"kubernetes.io/projected/5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d-kube-api-access-dfkrc\") pod \"5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d\" (UID: \"5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d\") "
Oct 13 08:03:36 crc kubenswrapper[4833]: I1013 08:03:36.978868 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d-kube-api-access-dfkrc" (OuterVolumeSpecName: "kube-api-access-dfkrc") pod "5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d" (UID: "5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d"). InnerVolumeSpecName "kube-api-access-dfkrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:03:37 crc kubenswrapper[4833]: I1013 08:03:37.073472 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfkrc\" (UniqueName: \"kubernetes.io/projected/5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d-kube-api-access-dfkrc\") on node \"crc\" DevicePath \"\""
Oct 13 08:03:37 crc kubenswrapper[4833]: I1013 08:03:37.449734 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvmh4" event={"ID":"5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d","Type":"ContainerDied","Data":"fca859b9dec2ada0dc444510986c8e7542696345f54236d46ccb586fc8a6e4fc"}
Oct 13 08:03:37 crc kubenswrapper[4833]: I1013 08:03:37.449784 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvmh4"
Oct 13 08:03:37 crc kubenswrapper[4833]: I1013 08:03:37.449796 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca859b9dec2ada0dc444510986c8e7542696345f54236d46ccb586fc8a6e4fc"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.666593 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4a75-account-create-xgv4p"]
Oct 13 08:03:43 crc kubenswrapper[4833]: E1013 08:03:43.667698 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d" containerName="mariadb-database-create"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.667723 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d" containerName="mariadb-database-create"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.667966 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d" containerName="mariadb-database-create"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.668802 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4a75-account-create-xgv4p"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.671988 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.677909 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4a75-account-create-xgv4p"]
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.807583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4btb\" (UniqueName: \"kubernetes.io/projected/e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b-kube-api-access-g4btb\") pod \"cinder-4a75-account-create-xgv4p\" (UID: \"e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b\") " pod="openstack/cinder-4a75-account-create-xgv4p"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.908659 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4btb\" (UniqueName: \"kubernetes.io/projected/e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b-kube-api-access-g4btb\") pod \"cinder-4a75-account-create-xgv4p\" (UID: \"e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b\") " pod="openstack/cinder-4a75-account-create-xgv4p"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.941848 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4btb\" (UniqueName: \"kubernetes.io/projected/e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b-kube-api-access-g4btb\") pod \"cinder-4a75-account-create-xgv4p\" (UID: \"e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b\") " pod="openstack/cinder-4a75-account-create-xgv4p"
Oct 13 08:03:43 crc kubenswrapper[4833]: I1013 08:03:43.997216 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4a75-account-create-xgv4p"
Oct 13 08:03:44 crc kubenswrapper[4833]: I1013 08:03:44.333825 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4a75-account-create-xgv4p"]
Oct 13 08:03:44 crc kubenswrapper[4833]: I1013 08:03:44.532600 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4a75-account-create-xgv4p" event={"ID":"e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b","Type":"ContainerStarted","Data":"6807c762f1c7e8e0f53cf97203eca753ac88866be714d116a7d3fdb106d9bb53"}
Oct 13 08:03:45 crc kubenswrapper[4833]: I1013 08:03:45.545360 4833 generic.go:334] "Generic (PLEG): container finished" podID="e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b" containerID="5bfe638406657c961623ef99c2da2850489f807594bbaa6e4c9a5214b966ecf4" exitCode=0
Oct 13 08:03:45 crc kubenswrapper[4833]: I1013 08:03:45.545426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4a75-account-create-xgv4p" event={"ID":"e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b","Type":"ContainerDied","Data":"5bfe638406657c961623ef99c2da2850489f807594bbaa6e4c9a5214b966ecf4"}
Oct 13 08:03:46 crc kubenswrapper[4833]: I1013 08:03:46.940451 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4a75-account-create-xgv4p" Oct 13 08:03:47 crc kubenswrapper[4833]: I1013 08:03:47.070086 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4btb\" (UniqueName: \"kubernetes.io/projected/e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b-kube-api-access-g4btb\") pod \"e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b\" (UID: \"e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b\") " Oct 13 08:03:47 crc kubenswrapper[4833]: I1013 08:03:47.081052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b-kube-api-access-g4btb" (OuterVolumeSpecName: "kube-api-access-g4btb") pod "e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b" (UID: "e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b"). InnerVolumeSpecName "kube-api-access-g4btb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:03:47 crc kubenswrapper[4833]: I1013 08:03:47.171911 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4btb\" (UniqueName: \"kubernetes.io/projected/e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b-kube-api-access-g4btb\") on node \"crc\" DevicePath \"\"" Oct 13 08:03:47 crc kubenswrapper[4833]: I1013 08:03:47.564573 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4a75-account-create-xgv4p" event={"ID":"e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b","Type":"ContainerDied","Data":"6807c762f1c7e8e0f53cf97203eca753ac88866be714d116a7d3fdb106d9bb53"} Oct 13 08:03:47 crc kubenswrapper[4833]: I1013 08:03:47.564623 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6807c762f1c7e8e0f53cf97203eca753ac88866be714d116a7d3fdb106d9bb53" Oct 13 08:03:47 crc kubenswrapper[4833]: I1013 08:03:47.564651 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4a75-account-create-xgv4p" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.627082 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:03:48 crc kubenswrapper[4833]: E1013 08:03:48.627551 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.836818 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-c25l7"] Oct 13 08:03:48 crc kubenswrapper[4833]: E1013 08:03:48.837353 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b" containerName="mariadb-account-create" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.837387 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b" containerName="mariadb-account-create" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.837756 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b" containerName="mariadb-account-create" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.838751 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.841039 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.842173 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2bzd8" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.842177 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.847314 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-c25l7"] Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.907205 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e923cf03-0639-47ae-af30-793d9582ec2b-etc-machine-id\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.907320 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwsb6\" (UniqueName: \"kubernetes.io/projected/e923cf03-0639-47ae-af30-793d9582ec2b-kube-api-access-hwsb6\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.907368 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-db-sync-config-data\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.907421 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-combined-ca-bundle\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.907479 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-config-data\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:48 crc kubenswrapper[4833]: I1013 08:03:48.907507 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-scripts\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.009260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-combined-ca-bundle\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.009367 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-config-data\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.009398 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-scripts\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.010527 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e923cf03-0639-47ae-af30-793d9582ec2b-etc-machine-id\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.010628 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e923cf03-0639-47ae-af30-793d9582ec2b-etc-machine-id\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.010644 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwsb6\" (UniqueName: \"kubernetes.io/projected/e923cf03-0639-47ae-af30-793d9582ec2b-kube-api-access-hwsb6\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.010747 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-db-sync-config-data\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.015962 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-scripts\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.018557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-db-sync-config-data\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.019701 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-combined-ca-bundle\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.028758 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-config-data\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " 
pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.033939 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwsb6\" (UniqueName: \"kubernetes.io/projected/e923cf03-0639-47ae-af30-793d9582ec2b-kube-api-access-hwsb6\") pod \"cinder-db-sync-c25l7\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.155797 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:49 crc kubenswrapper[4833]: I1013 08:03:49.670420 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-c25l7"] Oct 13 08:03:49 crc kubenswrapper[4833]: W1013 08:03:49.678353 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode923cf03_0639_47ae_af30_793d9582ec2b.slice/crio-e2ec82495b024cad2d1b6d262496a8697f8a86d7f00b97c8dceb744287817cb6 WatchSource:0}: Error finding container e2ec82495b024cad2d1b6d262496a8697f8a86d7f00b97c8dceb744287817cb6: Status 404 returned error can't find the container with id e2ec82495b024cad2d1b6d262496a8697f8a86d7f00b97c8dceb744287817cb6 Oct 13 08:03:50 crc kubenswrapper[4833]: I1013 08:03:50.599814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c25l7" event={"ID":"e923cf03-0639-47ae-af30-793d9582ec2b","Type":"ContainerStarted","Data":"4972162f9b7ee3ce65a461cc5c46bb9c6b189afd806fd4241c71a71f927d4297"} Oct 13 08:03:50 crc kubenswrapper[4833]: I1013 08:03:50.600157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c25l7" event={"ID":"e923cf03-0639-47ae-af30-793d9582ec2b","Type":"ContainerStarted","Data":"e2ec82495b024cad2d1b6d262496a8697f8a86d7f00b97c8dceb744287817cb6"} Oct 13 08:03:50 crc kubenswrapper[4833]: I1013 08:03:50.627522 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-c25l7" podStartSLOduration=2.627499231 podStartE2EDuration="2.627499231s" podCreationTimestamp="2025-10-13 08:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:03:50.62147048 +0000 UTC m=+5720.721893416" watchObservedRunningTime="2025-10-13 08:03:50.627499231 +0000 UTC m=+5720.727922187" Oct 13 08:03:51 crc kubenswrapper[4833]: I1013 08:03:51.407024 4833 scope.go:117] "RemoveContainer" containerID="054af7fab471f7272ab3504b942875052e0832e0e624e58ca28c67052d1667b4" Oct 13 08:03:51 crc kubenswrapper[4833]: I1013 08:03:51.457038 4833 scope.go:117] "RemoveContainer" containerID="6bf45e9adc7c6b9491ec622a592655cde72c18832e72e2bb5e45779df1800aeb" Oct 13 08:03:53 crc kubenswrapper[4833]: I1013 08:03:53.645301 4833 generic.go:334] "Generic (PLEG): container finished" podID="e923cf03-0639-47ae-af30-793d9582ec2b" containerID="4972162f9b7ee3ce65a461cc5c46bb9c6b189afd806fd4241c71a71f927d4297" exitCode=0 Oct 13 08:03:53 crc kubenswrapper[4833]: I1013 08:03:53.645414 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c25l7" event={"ID":"e923cf03-0639-47ae-af30-793d9582ec2b","Type":"ContainerDied","Data":"4972162f9b7ee3ce65a461cc5c46bb9c6b189afd806fd4241c71a71f927d4297"} Oct 13 08:03:54 crc kubenswrapper[4833]: I1013 08:03:54.990093 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.131300 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-config-data\") pod \"e923cf03-0639-47ae-af30-793d9582ec2b\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.131474 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-db-sync-config-data\") pod \"e923cf03-0639-47ae-af30-793d9582ec2b\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.131524 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwsb6\" (UniqueName: \"kubernetes.io/projected/e923cf03-0639-47ae-af30-793d9582ec2b-kube-api-access-hwsb6\") pod \"e923cf03-0639-47ae-af30-793d9582ec2b\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.131623 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-combined-ca-bundle\") pod \"e923cf03-0639-47ae-af30-793d9582ec2b\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.131669 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e923cf03-0639-47ae-af30-793d9582ec2b-etc-machine-id\") pod \"e923cf03-0639-47ae-af30-793d9582ec2b\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.131706 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-scripts\") pod \"e923cf03-0639-47ae-af30-793d9582ec2b\" (UID: \"e923cf03-0639-47ae-af30-793d9582ec2b\") " Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.131812 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e923cf03-0639-47ae-af30-793d9582ec2b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e923cf03-0639-47ae-af30-793d9582ec2b" (UID: "e923cf03-0639-47ae-af30-793d9582ec2b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.133215 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e923cf03-0639-47ae-af30-793d9582ec2b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.139253 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-scripts" (OuterVolumeSpecName: "scripts") pod "e923cf03-0639-47ae-af30-793d9582ec2b" (UID: "e923cf03-0639-47ae-af30-793d9582ec2b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.139385 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e923cf03-0639-47ae-af30-793d9582ec2b-kube-api-access-hwsb6" (OuterVolumeSpecName: "kube-api-access-hwsb6") pod "e923cf03-0639-47ae-af30-793d9582ec2b" (UID: "e923cf03-0639-47ae-af30-793d9582ec2b"). InnerVolumeSpecName "kube-api-access-hwsb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.145717 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e923cf03-0639-47ae-af30-793d9582ec2b" (UID: "e923cf03-0639-47ae-af30-793d9582ec2b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.163451 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e923cf03-0639-47ae-af30-793d9582ec2b" (UID: "e923cf03-0639-47ae-af30-793d9582ec2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.195221 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-config-data" (OuterVolumeSpecName: "config-data") pod "e923cf03-0639-47ae-af30-793d9582ec2b" (UID: "e923cf03-0639-47ae-af30-793d9582ec2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.235722 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.235772 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwsb6\" (UniqueName: \"kubernetes.io/projected/e923cf03-0639-47ae-af30-793d9582ec2b-kube-api-access-hwsb6\") on node \"crc\" DevicePath \"\"" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.235791 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.235811 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.235828 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e923cf03-0639-47ae-af30-793d9582ec2b-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.661754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c25l7" event={"ID":"e923cf03-0639-47ae-af30-793d9582ec2b","Type":"ContainerDied","Data":"e2ec82495b024cad2d1b6d262496a8697f8a86d7f00b97c8dceb744287817cb6"} Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.661803 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ec82495b024cad2d1b6d262496a8697f8a86d7f00b97c8dceb744287817cb6" Oct 13 08:03:55 crc kubenswrapper[4833]: I1013 08:03:55.661837 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c25l7" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.012246 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69565fc5c9-v8vbs"] Oct 13 08:03:56 crc kubenswrapper[4833]: E1013 08:03:56.012596 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e923cf03-0639-47ae-af30-793d9582ec2b" containerName="cinder-db-sync" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.012609 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e923cf03-0639-47ae-af30-793d9582ec2b" containerName="cinder-db-sync" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.012778 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e923cf03-0639-47ae-af30-793d9582ec2b" containerName="cinder-db-sync" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.013612 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.025667 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69565fc5c9-v8vbs"] Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.050377 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-sb\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.050438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-nb\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.050477 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72png\" (UniqueName: \"kubernetes.io/projected/d0a0cc6b-1478-4d81-b17b-467aee896980-kube-api-access-72png\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.050525 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-config\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.050558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-dns-svc\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.152567 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-nb\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.152630 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72png\" (UniqueName: \"kubernetes.io/projected/d0a0cc6b-1478-4d81-b17b-467aee896980-kube-api-access-72png\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.152681 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-config\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.152701 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-dns-svc\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.152774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-sb\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.153692 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-sb\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.153711 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-nb\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.153733 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-dns-svc\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.154051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-config\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.160243 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.161521 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.164419 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.164557 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.164590 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2bzd8" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.166728 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.188790 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.203748 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72png\" (UniqueName: \"kubernetes.io/projected/d0a0cc6b-1478-4d81-b17b-467aee896980-kube-api-access-72png\") pod \"dnsmasq-dns-69565fc5c9-v8vbs\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.255840 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.255923 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.255950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.255989 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-scripts\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.256062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.256094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxg4x\" (UniqueName: \"kubernetes.io/projected/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-kube-api-access-kxg4x\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: 
I1013 08:03:56.256133 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-logs\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.337830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.357494 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-logs\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.357623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.357679 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.357694 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.357819 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-scripts\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.357823 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.357919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.357966 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxg4x\" (UniqueName: \"kubernetes.io/projected/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-kube-api-access-kxg4x\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.358390 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-logs\") 
pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.362488 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.362703 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.366861 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-scripts\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.371505 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.374124 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxg4x\" (UniqueName: \"kubernetes.io/projected/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-kube-api-access-kxg4x\") pod \"cinder-api-0\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.485331 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.576766 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69565fc5c9-v8vbs"] Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.676434 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" event={"ID":"d0a0cc6b-1478-4d81-b17b-467aee896980","Type":"ContainerStarted","Data":"f96daeafefa6ef09d513a1099d22e29e6545ec66698b04b43e6a2a6ddbffcc58"} Oct 13 08:03:56 crc kubenswrapper[4833]: I1013 08:03:56.779964 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:03:57 crc kubenswrapper[4833]: I1013 08:03:57.695338 4833 generic.go:334] "Generic (PLEG): container finished" podID="d0a0cc6b-1478-4d81-b17b-467aee896980" containerID="bac3403947034b4a4eeffa06766504cf48545d6cc50def508471adcbd802e709" exitCode=0 Oct 13 08:03:57 crc kubenswrapper[4833]: I1013 08:03:57.695458 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" event={"ID":"d0a0cc6b-1478-4d81-b17b-467aee896980","Type":"ContainerDied","Data":"bac3403947034b4a4eeffa06766504cf48545d6cc50def508471adcbd802e709"} Oct 13 08:03:57 crc kubenswrapper[4833]: I1013 08:03:57.700727 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55","Type":"ContainerStarted","Data":"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e"} Oct 13 08:03:57 crc kubenswrapper[4833]: I1013 08:03:57.700761 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55","Type":"ContainerStarted","Data":"d6119ed3501f4c4dbd659f3ede87a96ca80f78804755c66a6ed5c289dbf8e559"} Oct 13 08:03:58 crc kubenswrapper[4833]: I1013 08:03:58.223860 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:03:58 crc kubenswrapper[4833]: I1013 08:03:58.712992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" event={"ID":"d0a0cc6b-1478-4d81-b17b-467aee896980","Type":"ContainerStarted","Data":"e3900fde784d1198adfad6002cc06aeff629e1828a24a0674ae9e41ebdf0f38d"} Oct 13 08:03:58 crc kubenswrapper[4833]: I1013 08:03:58.713385 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:03:58 crc kubenswrapper[4833]: I1013 08:03:58.716176 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55","Type":"ContainerStarted","Data":"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69"} Oct 13 08:03:58 crc kubenswrapper[4833]: I1013 08:03:58.716394 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 13 08:03:58 crc kubenswrapper[4833]: I1013 08:03:58.736992 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" podStartSLOduration=3.736974193 podStartE2EDuration="3.736974193s" podCreationTimestamp="2025-10-13 08:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:03:58.736014076 +0000 UTC m=+5728.836436992" watchObservedRunningTime="2025-10-13 08:03:58.736974193 +0000 UTC m=+5728.837397109" Oct 13 08:03:58 crc 
kubenswrapper[4833]: I1013 08:03:58.773390 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.773361138 podStartE2EDuration="2.773361138s" podCreationTimestamp="2025-10-13 08:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:03:58.755873211 +0000 UTC m=+5728.856296127" watchObservedRunningTime="2025-10-13 08:03:58.773361138 +0000 UTC m=+5728.873784054" Oct 13 08:03:59 crc kubenswrapper[4833]: I1013 08:03:59.726623 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerName="cinder-api-log" containerID="cri-o://60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e" gracePeriod=30 Oct 13 08:03:59 crc kubenswrapper[4833]: I1013 08:03:59.726695 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerName="cinder-api" containerID="cri-o://394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69" gracePeriod=30 Oct 13 08:04:00 crc kubenswrapper[4833]: E1013 08:04:00.140869 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f9e4eb1_b8eb_4db1_a310_4077ab8c0c55.slice/crio-conmon-394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f9e4eb1_b8eb_4db1_a310_4077ab8c0c55.slice/crio-394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.287971 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.336524 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data-custom\") pod \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.336656 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-combined-ca-bundle\") pod \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.337082 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxg4x\" (UniqueName: \"kubernetes.io/projected/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-kube-api-access-kxg4x\") pod \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.337165 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-etc-machine-id\") pod \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.337802 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-scripts\") pod \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.337924 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data\") pod \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.337962 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-logs\") pod \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\" (UID: \"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55\") " Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.337694 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" (UID: "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.339124 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-logs" (OuterVolumeSpecName: "logs") pod "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" (UID: "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.343081 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" (UID: "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.343217 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-scripts" (OuterVolumeSpecName: "scripts") pod "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" (UID: "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.344033 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-kube-api-access-kxg4x" (OuterVolumeSpecName: "kube-api-access-kxg4x") pod "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" (UID: "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55"). InnerVolumeSpecName "kube-api-access-kxg4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.367623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" (UID: "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.395611 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data" (OuterVolumeSpecName: "config-data") pod "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" (UID: "1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.440882 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxg4x\" (UniqueName: \"kubernetes.io/projected/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-kube-api-access-kxg4x\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.440948 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.440971 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.440994 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.441014 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.441031 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.441051 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.638384 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:04:00 crc kubenswrapper[4833]: E1013 08:04:00.638964 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.742744 4833 generic.go:334] "Generic (PLEG): container finished" podID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerID="394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69" exitCode=0 Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.742795 4833 generic.go:334] "Generic (PLEG): container finished" podID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerID="60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e" exitCode=143 Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.742827 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55","Type":"ContainerDied","Data":"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69"} Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.742866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55","Type":"ContainerDied","Data":"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e"} Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.742886 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55","Type":"ContainerDied","Data":"d6119ed3501f4c4dbd659f3ede87a96ca80f78804755c66a6ed5c289dbf8e559"} Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.742912 4833 scope.go:117] "RemoveContainer" containerID="394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.743087 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.773656 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.784371 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.799626 4833 scope.go:117] "RemoveContainer" containerID="60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.808397 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:04:00 crc kubenswrapper[4833]: E1013 08:04:00.808830 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerName="cinder-api" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.808845 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerName="cinder-api" Oct 13 08:04:00 crc kubenswrapper[4833]: E1013 08:04:00.808876 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerName="cinder-api-log" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.808885 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerName="cinder-api-log" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.809087 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerName="cinder-api" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.809118 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" containerName="cinder-api-log" Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.810203 4833 util.go:30] "No sandbox for pod can be found. 
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.810203 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.817954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.819801 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.820853 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.820971 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.821140 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2bzd8"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.824866 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.827855 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.842725 4833 scope.go:117] "RemoveContainer" containerID="394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69"
Oct 13 08:04:00 crc kubenswrapper[4833]: E1013 08:04:00.843151 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69\": container with ID starting with 394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69 not found: ID does not exist" containerID="394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.843183 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69"} err="failed to get container status \"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69\": rpc error: code = NotFound desc = could not find container \"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69\": container with ID starting with 394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69 not found: ID does not exist"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.843214 4833 scope.go:117] "RemoveContainer" containerID="60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e"
Oct 13 08:04:00 crc kubenswrapper[4833]: E1013 08:04:00.843439 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e\": container with ID starting with 60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e not found: ID does not exist" containerID="60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.843463 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e"} err="failed to get container status \"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e\": rpc error: code = NotFound desc = could not find container \"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e\": container with ID starting with 60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e not found: ID does not exist"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.843476 4833 scope.go:117] "RemoveContainer" containerID="394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.843692 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69"} err="failed to get container status \"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69\": rpc error: code = NotFound desc = could not find container \"394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69\": container with ID starting with 394de5d3ed57800ed5071c663accb432bb6c6a9b93e2d1c0695260fa2e789a69 not found: ID does not exist"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.843712 4833 scope.go:117] "RemoveContainer" containerID="60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.843863 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e"} err="failed to get container status \"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e\": rpc error: code = NotFound desc = could not find container \"60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e\": container with ID starting with 60676af0301e1bd93f7b182f53039f972d15b1bbab6274d5c09e59ff81df352e not found: ID does not exist"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848381 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848415 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848435 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848563 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data-custom\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848635 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848670 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4876de8b-a077-410e-b98a-1b50beaa4efc-logs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848716 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkgs\" (UniqueName: \"kubernetes.io/projected/4876de8b-a077-410e-b98a-1b50beaa4efc-kube-api-access-pjkgs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848778 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4876de8b-a077-410e-b98a-1b50beaa4efc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.848880 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-scripts\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.950620 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.950749 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data-custom\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.950815 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.950842 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4876de8b-a077-410e-b98a-1b50beaa4efc-logs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.950873 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjkgs\" (UniqueName: \"kubernetes.io/projected/4876de8b-a077-410e-b98a-1b50beaa4efc-kube-api-access-pjkgs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.950939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4876de8b-a077-410e-b98a-1b50beaa4efc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.950996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-scripts\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.951037 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.951062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.952095 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4876de8b-a077-410e-b98a-1b50beaa4efc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.952759 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4876de8b-a077-410e-b98a-1b50beaa4efc-logs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.955216 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.955528 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-scripts\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.955814 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data-custom\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.959341 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.963077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.968256 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:00 crc kubenswrapper[4833]: I1013 08:04:00.975627 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjkgs\" (UniqueName: \"kubernetes.io/projected/4876de8b-a077-410e-b98a-1b50beaa4efc-kube-api-access-pjkgs\") pod \"cinder-api-0\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") " pod="openstack/cinder-api-0"
Oct 13 08:04:01 crc kubenswrapper[4833]: I1013 08:04:01.147830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 13 08:04:01 crc kubenswrapper[4833]: I1013 08:04:01.677296 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 13 08:04:01 crc kubenswrapper[4833]: W1013 08:04:01.689004 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4876de8b_a077_410e_b98a_1b50beaa4efc.slice/crio-5a7ac6de90e53a5c90bee95acbefdc80ccd8cda8d9ec5e03620450ae7efc1eeb WatchSource:0}: Error finding container 5a7ac6de90e53a5c90bee95acbefdc80ccd8cda8d9ec5e03620450ae7efc1eeb: Status 404 returned error can't find the container with id 5a7ac6de90e53a5c90bee95acbefdc80ccd8cda8d9ec5e03620450ae7efc1eeb
Oct 13 08:04:01 crc kubenswrapper[4833]: I1013 08:04:01.759293 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4876de8b-a077-410e-b98a-1b50beaa4efc","Type":"ContainerStarted","Data":"5a7ac6de90e53a5c90bee95acbefdc80ccd8cda8d9ec5e03620450ae7efc1eeb"}
Oct 13 08:04:02 crc kubenswrapper[4833]: I1013 08:04:02.643406 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55" path="/var/lib/kubelet/pods/1f9e4eb1-b8eb-4db1-a310-4077ab8c0c55/volumes"
Oct 13 08:04:02 crc kubenswrapper[4833]: I1013 08:04:02.776988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4876de8b-a077-410e-b98a-1b50beaa4efc","Type":"ContainerStarted","Data":"e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822"}
Oct 13 08:04:03 crc kubenswrapper[4833]: I1013 08:04:03.786808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4876de8b-a077-410e-b98a-1b50beaa4efc","Type":"ContainerStarted","Data":"1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295"}
Oct 13 08:04:03 crc kubenswrapper[4833]: I1013 08:04:03.787174 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Oct 13 08:04:03 crc kubenswrapper[4833]: I1013 08:04:03.815419 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.815396052 podStartE2EDuration="3.815396052s" podCreationTimestamp="2025-10-13 08:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:04:03.809205936 +0000 UTC m=+5733.909628852" watchObservedRunningTime="2025-10-13 08:04:03.815396052 +0000 UTC m=+5733.915818968"
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.340601 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs"
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.421762 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bc49bc47-zb67j"]
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.422207 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" podUID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" containerName="dnsmasq-dns" containerID="cri-o://77044a256a92aa096dd01dd5c4fcf4eb460b79340af1e07c32c3d8a99fc2df1f" gracePeriod=10
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.821774 4833 generic.go:334] "Generic (PLEG): container finished" podID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" containerID="77044a256a92aa096dd01dd5c4fcf4eb460b79340af1e07c32c3d8a99fc2df1f" exitCode=0
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.821852 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" event={"ID":"bb80d1ba-d1ff-4641-9e07-c06e41ef4756","Type":"ContainerDied","Data":"77044a256a92aa096dd01dd5c4fcf4eb460b79340af1e07c32c3d8a99fc2df1f"}
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.908149 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j"
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.979189 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-sb\") pod \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") "
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.979284 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-dns-svc\") pod \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") "
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.979321 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-nb\") pod \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") "
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.979345 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cpw\" (UniqueName: \"kubernetes.io/projected/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-kube-api-access-g6cpw\") pod \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") "
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.979383 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-config\") pod \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\" (UID: \"bb80d1ba-d1ff-4641-9e07-c06e41ef4756\") "
Oct 13 08:04:06 crc kubenswrapper[4833]: I1013 08:04:06.985039 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-kube-api-access-g6cpw" (OuterVolumeSpecName: "kube-api-access-g6cpw") pod "bb80d1ba-d1ff-4641-9e07-c06e41ef4756" (UID: "bb80d1ba-d1ff-4641-9e07-c06e41ef4756"). InnerVolumeSpecName "kube-api-access-g6cpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.022290 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-config" (OuterVolumeSpecName: "config") pod "bb80d1ba-d1ff-4641-9e07-c06e41ef4756" (UID: "bb80d1ba-d1ff-4641-9e07-c06e41ef4756"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.024088 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bb80d1ba-d1ff-4641-9e07-c06e41ef4756" (UID: "bb80d1ba-d1ff-4641-9e07-c06e41ef4756"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.033517 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bb80d1ba-d1ff-4641-9e07-c06e41ef4756" (UID: "bb80d1ba-d1ff-4641-9e07-c06e41ef4756"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.040264 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb80d1ba-d1ff-4641-9e07-c06e41ef4756" (UID: "bb80d1ba-d1ff-4641-9e07-c06e41ef4756"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.081180 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-config\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.081221 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.081234 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.081246 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.081258 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cpw\" (UniqueName: \"kubernetes.io/projected/bb80d1ba-d1ff-4641-9e07-c06e41ef4756-kube-api-access-g6cpw\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.837788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j" event={"ID":"bb80d1ba-d1ff-4641-9e07-c06e41ef4756","Type":"ContainerDied","Data":"b7c38eb84230f323213c5adb66b620731a15cbd72522506203cf958f92d50d5e"}
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.837891 4833 scope.go:117] "RemoveContainer" containerID="77044a256a92aa096dd01dd5c4fcf4eb460b79340af1e07c32c3d8a99fc2df1f"
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.837830 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bc49bc47-zb67j"
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.880161 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bc49bc47-zb67j"]
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.886073 4833 scope.go:117] "RemoveContainer" containerID="303cf075d25684ce8378dec9d6a4a126bb061d42ddb8a1be3c99096e77de3af3"
Oct 13 08:04:07 crc kubenswrapper[4833]: I1013 08:04:07.888966 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bc49bc47-zb67j"]
Oct 13 08:04:08 crc kubenswrapper[4833]: I1013 08:04:08.643530 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" path="/var/lib/kubelet/pods/bb80d1ba-d1ff-4641-9e07-c06e41ef4756/volumes"
Oct 13 08:04:12 crc kubenswrapper[4833]: I1013 08:04:12.628078 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8"
Oct 13 08:04:12 crc kubenswrapper[4833]: E1013 08:04:12.628805 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:04:12 crc kubenswrapper[4833]: I1013 08:04:12.785566 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Oct 13 08:04:23 crc kubenswrapper[4833]: I1013 08:04:23.628636 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8"
Oct 13 08:04:23 crc kubenswrapper[4833]: E1013 08:04:23.629813 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.513979 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 13 08:04:29 crc kubenswrapper[4833]: E1013 08:04:29.515036 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" containerName="dnsmasq-dns"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.515059 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" containerName="dnsmasq-dns"
Oct 13 08:04:29 crc kubenswrapper[4833]: E1013 08:04:29.515128 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" containerName="init"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.515141 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" containerName="init"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.515424 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb80d1ba-d1ff-4641-9e07-c06e41ef4756" containerName="dnsmasq-dns"
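Note: the machine-config-daemon entries at 08:04:00, 08:04:12, 08:04:23 and 08:04:34 are the same pod being re-synced while its restart back-off is active; "back-off 5m0s" means the delay has reached the kubelet's cap. Kubernetes restart back-off starts at 10s and doubles per restart up to 5m, resetting after 10 minutes of successful running. A Go sketch of that series (illustrative arithmetic, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap in the log
	delay := 10 * time.Second        // kubelet's initial restart delay
	for i := 1; ; i++ {
		fmt.Printf("restart %d: wait %v\n", i, delay)
		if delay == maxDelay {
			break // stays at 5m0s until the container runs cleanly for 10min
		}
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	// output: 10s 20s 40s 1m20s 2m40s 5m0s -- once capped, every pod sync
	// logs the CrashLoopBackOff error seen above until the back-off expires
}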
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.517601 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.523366 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.534885 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.611279 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.611424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzl57\" (UniqueName: \"kubernetes.io/projected/f3603b64-2260-4bc6-bea6-6aab0745367a-kube-api-access-hzl57\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.611459 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-scripts\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.611489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3603b64-2260-4bc6-bea6-6aab0745367a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.611520 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.611583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.713180 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.713261 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.713295 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.713457 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzl57\" (UniqueName: \"kubernetes.io/projected/f3603b64-2260-4bc6-bea6-6aab0745367a-kube-api-access-hzl57\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.713491 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-scripts\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.713518 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3603b64-2260-4bc6-bea6-6aab0745367a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.713626 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3603b64-2260-4bc6-bea6-6aab0745367a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.719659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.720308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.721369 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-scripts\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.723382 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.735209 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzl57\" (UniqueName: \"kubernetes.io/projected/f3603b64-2260-4bc6-bea6-6aab0745367a-kube-api-access-hzl57\") pod \"cinder-scheduler-0\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " pod="openstack/cinder-scheduler-0"
Oct 13 08:04:29 crc kubenswrapper[4833]: I1013 08:04:29.840531 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 13 08:04:30 crc kubenswrapper[4833]: I1013 08:04:30.362236 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 13 08:04:31 crc kubenswrapper[4833]: I1013 08:04:31.114301 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3603b64-2260-4bc6-bea6-6aab0745367a","Type":"ContainerStarted","Data":"8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1"}
Oct 13 08:04:31 crc kubenswrapper[4833]: I1013 08:04:31.114566 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3603b64-2260-4bc6-bea6-6aab0745367a","Type":"ContainerStarted","Data":"6b2cfa1ca550c6433867514dcd9508b1ea6f96bfa3fbfda191eeb49ac0a2d3ba"}
Oct 13 08:04:31 crc kubenswrapper[4833]: I1013 08:04:31.155736 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 13 08:04:31 crc kubenswrapper[4833]: I1013 08:04:31.156000 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api-log" containerID="cri-o://e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822" gracePeriod=30
Oct 13 08:04:31 crc kubenswrapper[4833]: I1013 08:04:31.156157 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api" containerID="cri-o://1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295" gracePeriod=30
Oct 13 08:04:31 crc kubenswrapper[4833]: I1013 08:04:31.162315 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.57:8776/healthcheck\": EOF"
Oct 13 08:04:31 crc kubenswrapper[4833]: I1013 08:04:31.162478 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.57:8776/healthcheck\": EOF"
Oct 13 08:04:32 crc kubenswrapper[4833]: I1013 08:04:32.128877 4833 generic.go:334] "Generic (PLEG): container finished" podID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerID="e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822" exitCode=143
Oct 13 08:04:32 crc kubenswrapper[4833]: I1013 08:04:32.128936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4876de8b-a077-410e-b98a-1b50beaa4efc","Type":"ContainerDied","Data":"e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822"}
Oct 13 08:04:32 crc kubenswrapper[4833]: I1013 08:04:32.131853 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3603b64-2260-4bc6-bea6-6aab0745367a","Type":"ContainerStarted","Data":"5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d"}
Oct 13 08:04:32 crc kubenswrapper[4833]: I1013 08:04:32.150239 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.150221627 podStartE2EDuration="3.150221627s" podCreationTimestamp="2025-10-13 08:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:04:32.149724663 +0000 UTC m=+5762.250147589" watchObservedRunningTime="2025-10-13 08:04:32.150221627 +0000 UTC m=+5762.250644553"
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.627618 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8"
Oct 13 08:04:34 crc kubenswrapper[4833]: E1013 08:04:34.628879 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.732867 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.831709 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-scripts\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.831779 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.831823 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-public-tls-certs\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.831876 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4876de8b-a077-410e-b98a-1b50beaa4efc-logs\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.831931 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4876de8b-a077-410e-b98a-1b50beaa4efc-etc-machine-id\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.831955 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-combined-ca-bundle\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.832028 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjkgs\" (UniqueName: \"kubernetes.io/projected/4876de8b-a077-410e-b98a-1b50beaa4efc-kube-api-access-pjkgs\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.832076 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-internal-tls-certs\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.832105 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data-custom\") pod \"4876de8b-a077-410e-b98a-1b50beaa4efc\" (UID: \"4876de8b-a077-410e-b98a-1b50beaa4efc\") "
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.832623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4876de8b-a077-410e-b98a-1b50beaa4efc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.833028 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4876de8b-a077-410e-b98a-1b50beaa4efc-logs" (OuterVolumeSpecName: "logs") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.833812 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4876de8b-a077-410e-b98a-1b50beaa4efc-logs\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.833842 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4876de8b-a077-410e-b98a-1b50beaa4efc-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.838617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-scripts" (OuterVolumeSpecName: "scripts") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.842598 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.848388 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4876de8b-a077-410e-b98a-1b50beaa4efc-kube-api-access-pjkgs" (OuterVolumeSpecName: "kube-api-access-pjkgs") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "kube-api-access-pjkgs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.851792 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.862949 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.894705 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.921864 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data" (OuterVolumeSpecName: "config-data") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.930730 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4876de8b-a077-410e-b98a-1b50beaa4efc" (UID: "4876de8b-a077-410e-b98a-1b50beaa4efc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.936362 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.936392 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.936402 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.936412 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.936423 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjkgs\" (UniqueName: \"kubernetes.io/projected/4876de8b-a077-410e-b98a-1b50beaa4efc-kube-api-access-pjkgs\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.936432 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:34 crc kubenswrapper[4833]: I1013 08:04:34.936440 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4876de8b-a077-410e-b98a-1b50beaa4efc-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.163606 4833 generic.go:334] "Generic (PLEG): container finished" podID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerID="1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295" exitCode=0
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.163664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4876de8b-a077-410e-b98a-1b50beaa4efc","Type":"ContainerDied","Data":"1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295"}
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.163721 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4876de8b-a077-410e-b98a-1b50beaa4efc","Type":"ContainerDied","Data":"5a7ac6de90e53a5c90bee95acbefdc80ccd8cda8d9ec5e03620450ae7efc1eeb"}
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.163741 4833 scope.go:117] "RemoveContainer" containerID="1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.163745 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.205345 4833 scope.go:117] "RemoveContainer" containerID="e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.231698 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.238063 4833 scope.go:117] "RemoveContainer" containerID="1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295"
Oct 13 08:04:35 crc kubenswrapper[4833]: E1013 08:04:35.238598 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295\": container with ID starting with 1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295 not found: ID does not exist" containerID="1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.238645 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295"} err="failed to get container status \"1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295\": rpc error: code = NotFound desc = could not find container \"1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295\": container with ID starting with 1e01df2a4d059b1895931688d0fd5a668097dc6750d62c8cc487fb12a5e38295 not found: ID does not exist"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.238675 4833 scope.go:117] "RemoveContainer" containerID="e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822"
Oct 13 08:04:35 crc kubenswrapper[4833]: E1013 08:04:35.239418 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822\": container with ID starting with e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822 not found: ID does not exist" containerID="e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.239464 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822"} err="failed to get container status \"e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822\": rpc error: code = NotFound desc = could not find container \"e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822\": container with ID starting with e8d0cdf7cbfa222afbde0f470a068ac9b5347d72ae0e454e2df7c47a9a485822 not found: ID does not exist"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.242646 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.269527 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Oct 13 08:04:35 crc kubenswrapper[4833]: E1013 08:04:35.270051 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api-log"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.270073 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api-log"
Oct 13 08:04:35 crc kubenswrapper[4833]: E1013 08:04:35.270092 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.270102 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.270306 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api-log"
Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.270336 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" containerName="cinder-api"
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.273530 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.275828 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.276133 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.283250 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.346616 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-scripts\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.346663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9mh\" (UniqueName: \"kubernetes.io/projected/8dd89614-5887-4774-bbd2-4b8a41630d51-kube-api-access-hs9mh\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.346746 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.346848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.346953 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8dd89614-5887-4774-bbd2-4b8a41630d51-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.347122 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-config-data-custom\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.347401 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-config-data\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.347460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8dd89614-5887-4774-bbd2-4b8a41630d51-logs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.347558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449360 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-config-data-custom\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449432 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-config-data\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449457 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dd89614-5887-4774-bbd2-4b8a41630d51-logs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449488 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449542 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-scripts\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449559 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs9mh\" (UniqueName: \"kubernetes.io/projected/8dd89614-5887-4774-bbd2-4b8a41630d51-kube-api-access-hs9mh\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449613 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449631 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/8dd89614-5887-4774-bbd2-4b8a41630d51-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.449721 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8dd89614-5887-4774-bbd2-4b8a41630d51-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.451102 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dd89614-5887-4774-bbd2-4b8a41630d51-logs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.455216 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.455698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.456995 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-config-data\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.457378 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-config-data-custom\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.457416 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.457686 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dd89614-5887-4774-bbd2-4b8a41630d51-scripts\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.472443 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs9mh\" (UniqueName: \"kubernetes.io/projected/8dd89614-5887-4774-bbd2-4b8a41630d51-kube-api-access-hs9mh\") pod \"cinder-api-0\" (UID: \"8dd89614-5887-4774-bbd2-4b8a41630d51\") " pod="openstack/cinder-api-0" Oct 13 08:04:35 crc kubenswrapper[4833]: I1013 08:04:35.588830 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 08:04:36 crc kubenswrapper[4833]: I1013 08:04:36.064396 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 08:04:36 crc kubenswrapper[4833]: I1013 08:04:36.180326 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8dd89614-5887-4774-bbd2-4b8a41630d51","Type":"ContainerStarted","Data":"1c353963f02657d04c103d228bdb10ca487760e80512258390e98e807251125a"} Oct 13 08:04:36 crc kubenswrapper[4833]: I1013 08:04:36.638086 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4876de8b-a077-410e-b98a-1b50beaa4efc" path="/var/lib/kubelet/pods/4876de8b-a077-410e-b98a-1b50beaa4efc/volumes" Oct 13 08:04:37 crc kubenswrapper[4833]: I1013 08:04:37.204230 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8dd89614-5887-4774-bbd2-4b8a41630d51","Type":"ContainerStarted","Data":"9b094362b966c71c5148428c45d257b8ef325ec00cc67ddd1b0611a265906e71"} Oct 13 08:04:38 crc kubenswrapper[4833]: I1013 08:04:38.215881 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8dd89614-5887-4774-bbd2-4b8a41630d51","Type":"ContainerStarted","Data":"c92a8fcefc626c129970d8d7e9ec9284123900e6111bab696a200d169c88a31d"} Oct 13 08:04:38 crc kubenswrapper[4833]: I1013 08:04:38.216824 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 13 08:04:40 crc kubenswrapper[4833]: I1013 08:04:40.076007 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 13 08:04:40 crc kubenswrapper[4833]: I1013 08:04:40.101041 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.10101575 podStartE2EDuration="5.10101575s" podCreationTimestamp="2025-10-13 08:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:04:38.245156689 +0000 UTC m=+5768.345579615" watchObservedRunningTime="2025-10-13 08:04:40.10101575 +0000 UTC m=+5770.201438696" Oct 13 08:04:40 crc kubenswrapper[4833]: I1013 08:04:40.144471 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 08:04:40 crc kubenswrapper[4833]: I1013 08:04:40.235337 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerName="cinder-scheduler" containerID="cri-o://8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1" gracePeriod=30 Oct 13 08:04:40 crc kubenswrapper[4833]: I1013 08:04:40.235494 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerName="probe" containerID="cri-o://5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d" gracePeriod=30 Oct 13 08:04:41 crc kubenswrapper[4833]: I1013 08:04:41.249612 4833 generic.go:334] "Generic (PLEG): container finished" podID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerID="5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d" exitCode=0 Oct 13 08:04:41 crc kubenswrapper[4833]: I1013 08:04:41.249692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"f3603b64-2260-4bc6-bea6-6aab0745367a","Type":"ContainerDied","Data":"5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d"} Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.228798 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.261763 4833 generic.go:334] "Generic (PLEG): container finished" podID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerID="8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1" exitCode=0 Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.261807 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.261824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3603b64-2260-4bc6-bea6-6aab0745367a","Type":"ContainerDied","Data":"8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1"} Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.261870 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3603b64-2260-4bc6-bea6-6aab0745367a","Type":"ContainerDied","Data":"6b2cfa1ca550c6433867514dcd9508b1ea6f96bfa3fbfda191eeb49ac0a2d3ba"} Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.261898 4833 scope.go:117] "RemoveContainer" containerID="5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.283984 4833 scope.go:117] "RemoveContainer" containerID="8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.301851 4833 scope.go:117] "RemoveContainer" containerID="5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d" Oct 13 08:04:42 crc kubenswrapper[4833]: E1013 08:04:42.302241 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d\": container with ID starting with 5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d not found: ID does not exist" containerID="5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.302305 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d"} err="failed to get container status \"5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d\": rpc error: code = NotFound desc = could not find container \"5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d\": container with ID starting with 5e5c7cd3366d2210e1e4060c92caa114ceb5cf9a7495dac8a65ec2cfad9c158d not found: ID does not exist" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.302341 4833 scope.go:117] "RemoveContainer" containerID="8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1" Oct 13 08:04:42 crc kubenswrapper[4833]: E1013 08:04:42.302668 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1\": container with ID starting with 8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1 not found: ID does not exist" 
containerID="8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.302730 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1"} err="failed to get container status \"8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1\": rpc error: code = NotFound desc = could not find container \"8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1\": container with ID starting with 8a95c5ab2cee076881d9b2cb5dec33c571fcbf29465e8bd4d44ef1630b9542f1 not found: ID does not exist" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.305364 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data-custom\") pod \"f3603b64-2260-4bc6-bea6-6aab0745367a\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.305445 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-scripts\") pod \"f3603b64-2260-4bc6-bea6-6aab0745367a\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.305495 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3603b64-2260-4bc6-bea6-6aab0745367a-etc-machine-id\") pod \"f3603b64-2260-4bc6-bea6-6aab0745367a\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.305663 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzl57\" (UniqueName: \"kubernetes.io/projected/f3603b64-2260-4bc6-bea6-6aab0745367a-kube-api-access-hzl57\") pod \"f3603b64-2260-4bc6-bea6-6aab0745367a\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.305696 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-combined-ca-bundle\") pod \"f3603b64-2260-4bc6-bea6-6aab0745367a\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.305719 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data\") pod \"f3603b64-2260-4bc6-bea6-6aab0745367a\" (UID: \"f3603b64-2260-4bc6-bea6-6aab0745367a\") " Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.306885 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3603b64-2260-4bc6-bea6-6aab0745367a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f3603b64-2260-4bc6-bea6-6aab0745367a" (UID: "f3603b64-2260-4bc6-bea6-6aab0745367a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.328313 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3603b64-2260-4bc6-bea6-6aab0745367a" (UID: "f3603b64-2260-4bc6-bea6-6aab0745367a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.328368 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-scripts" (OuterVolumeSpecName: "scripts") pod "f3603b64-2260-4bc6-bea6-6aab0745367a" (UID: "f3603b64-2260-4bc6-bea6-6aab0745367a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.328409 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3603b64-2260-4bc6-bea6-6aab0745367a-kube-api-access-hzl57" (OuterVolumeSpecName: "kube-api-access-hzl57") pod "f3603b64-2260-4bc6-bea6-6aab0745367a" (UID: "f3603b64-2260-4bc6-bea6-6aab0745367a"). InnerVolumeSpecName "kube-api-access-hzl57". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.384355 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3603b64-2260-4bc6-bea6-6aab0745367a" (UID: "f3603b64-2260-4bc6-bea6-6aab0745367a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.408687 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.408717 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.408727 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3603b64-2260-4bc6-bea6-6aab0745367a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.408736 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzl57\" (UniqueName: \"kubernetes.io/projected/f3603b64-2260-4bc6-bea6-6aab0745367a-kube-api-access-hzl57\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.408747 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.425270 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data" (OuterVolumeSpecName: "config-data") pod "f3603b64-2260-4bc6-bea6-6aab0745367a" (UID: "f3603b64-2260-4bc6-bea6-6aab0745367a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.510376 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3603b64-2260-4bc6-bea6-6aab0745367a-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.596708 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.610831 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.620814 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 08:04:42 crc kubenswrapper[4833]: E1013 08:04:42.621218 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerName="probe" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.621238 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerName="probe" Oct 13 08:04:42 crc kubenswrapper[4833]: E1013 08:04:42.621276 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerName="cinder-scheduler" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.621285 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerName="cinder-scheduler" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.621504 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerName="cinder-scheduler" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.621528 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" containerName="probe" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.622472 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.624463 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.672325 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3603b64-2260-4bc6-bea6-6aab0745367a" path="/var/lib/kubelet/pods/f3603b64-2260-4bc6-bea6-6aab0745367a/volumes" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.672996 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.713419 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-config-data\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.713454 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6a2abf4-9a00-428a-8d10-8212929d2dd4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.713521 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.713540 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbwv\" (UniqueName: \"kubernetes.io/projected/c6a2abf4-9a00-428a-8d10-8212929d2dd4-kube-api-access-ldbwv\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.713577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.713635 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-scripts\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.814775 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-config-data\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.814821 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c6a2abf4-9a00-428a-8d10-8212929d2dd4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.814866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.814887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldbwv\" (UniqueName: \"kubernetes.io/projected/c6a2abf4-9a00-428a-8d10-8212929d2dd4-kube-api-access-ldbwv\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.814906 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.814953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-scripts\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.815015 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6a2abf4-9a00-428a-8d10-8212929d2dd4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.818514 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-scripts\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.819069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-config-data\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.819185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.819459 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6a2abf4-9a00-428a-8d10-8212929d2dd4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.837314 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldbwv\" (UniqueName: \"kubernetes.io/projected/c6a2abf4-9a00-428a-8d10-8212929d2dd4-kube-api-access-ldbwv\") pod \"cinder-scheduler-0\" (UID: \"c6a2abf4-9a00-428a-8d10-8212929d2dd4\") " pod="openstack/cinder-scheduler-0" Oct 13 08:04:42 crc kubenswrapper[4833]: I1013 08:04:42.947087 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 08:04:43 crc kubenswrapper[4833]: I1013 08:04:43.193040 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 08:04:43 crc kubenswrapper[4833]: I1013 08:04:43.277175 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6a2abf4-9a00-428a-8d10-8212929d2dd4","Type":"ContainerStarted","Data":"56558f097ed53b736aa638fc50156a79b5074324170bc3f0736b29edd0f9dbd8"} Oct 13 08:04:44 crc kubenswrapper[4833]: I1013 08:04:44.292661 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6a2abf4-9a00-428a-8d10-8212929d2dd4","Type":"ContainerStarted","Data":"0d31841e7ffc1059cc8aa3707261a6af030235bcead3db1ce5c51d1d17a98800"} Oct 13 08:04:45 crc kubenswrapper[4833]: I1013 08:04:45.306114 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6a2abf4-9a00-428a-8d10-8212929d2dd4","Type":"ContainerStarted","Data":"245ab59bbbd069764d3f9173b16faf353e472ae5f715e345f3b90e7f522f414f"} Oct 13 08:04:45 crc kubenswrapper[4833]: I1013 08:04:45.337118 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.337085669 podStartE2EDuration="3.337085669s" podCreationTimestamp="2025-10-13 08:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:04:45.327184097 +0000 UTC m=+5775.427607083" watchObservedRunningTime="2025-10-13 08:04:45.337085669 +0000 UTC m=+5775.437508635" Oct 13 08:04:47 crc kubenswrapper[4833]: I1013 08:04:47.497329 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 13 08:04:47 crc kubenswrapper[4833]: I1013 08:04:47.635213 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:04:47 crc kubenswrapper[4833]: E1013 08:04:47.635401 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:04:47 crc kubenswrapper[4833]: I1013 08:04:47.947792 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 13 08:04:53 crc kubenswrapper[4833]: I1013 08:04:53.318204 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 13 08:04:56 crc kubenswrapper[4833]: I1013 08:04:56.730270 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jspfq"] Oct 13 08:04:56 crc kubenswrapper[4833]: I1013 08:04:56.731925 4833 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/glance-db-create-jspfq" Oct 13 08:04:56 crc kubenswrapper[4833]: I1013 08:04:56.741270 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jspfq"] Oct 13 08:04:56 crc kubenswrapper[4833]: I1013 08:04:56.787280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsctj\" (UniqueName: \"kubernetes.io/projected/601fd1b5-672e-469c-ab36-ea8b202585b6-kube-api-access-lsctj\") pod \"glance-db-create-jspfq\" (UID: \"601fd1b5-672e-469c-ab36-ea8b202585b6\") " pod="openstack/glance-db-create-jspfq" Oct 13 08:04:56 crc kubenswrapper[4833]: I1013 08:04:56.890219 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsctj\" (UniqueName: \"kubernetes.io/projected/601fd1b5-672e-469c-ab36-ea8b202585b6-kube-api-access-lsctj\") pod \"glance-db-create-jspfq\" (UID: \"601fd1b5-672e-469c-ab36-ea8b202585b6\") " pod="openstack/glance-db-create-jspfq" Oct 13 08:04:56 crc kubenswrapper[4833]: I1013 08:04:56.920614 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsctj\" (UniqueName: \"kubernetes.io/projected/601fd1b5-672e-469c-ab36-ea8b202585b6-kube-api-access-lsctj\") pod \"glance-db-create-jspfq\" (UID: \"601fd1b5-672e-469c-ab36-ea8b202585b6\") " pod="openstack/glance-db-create-jspfq" Oct 13 08:04:57 crc kubenswrapper[4833]: I1013 08:04:57.052876 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jspfq" Oct 13 08:04:57 crc kubenswrapper[4833]: I1013 08:04:57.390715 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jspfq"] Oct 13 08:04:57 crc kubenswrapper[4833]: I1013 08:04:57.458704 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jspfq" event={"ID":"601fd1b5-672e-469c-ab36-ea8b202585b6","Type":"ContainerStarted","Data":"d5c79fcb267e467fd24acfc054ece66f5a25099590758290194e613df3234181"} Oct 13 08:04:58 crc kubenswrapper[4833]: I1013 08:04:58.470181 4833 generic.go:334] "Generic (PLEG): container finished" podID="601fd1b5-672e-469c-ab36-ea8b202585b6" containerID="a214e216d3bfa1eccf16a04212e5ec91cf8a9a7411baa081f51bc0977f4d636c" exitCode=0 Oct 13 08:04:58 crc kubenswrapper[4833]: I1013 08:04:58.470280 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jspfq" event={"ID":"601fd1b5-672e-469c-ab36-ea8b202585b6","Type":"ContainerDied","Data":"a214e216d3bfa1eccf16a04212e5ec91cf8a9a7411baa081f51bc0977f4d636c"} Oct 13 08:04:59 crc kubenswrapper[4833]: I1013 08:04:59.928498 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jspfq" Oct 13 08:04:59 crc kubenswrapper[4833]: I1013 08:04:59.943158 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsctj\" (UniqueName: \"kubernetes.io/projected/601fd1b5-672e-469c-ab36-ea8b202585b6-kube-api-access-lsctj\") pod \"601fd1b5-672e-469c-ab36-ea8b202585b6\" (UID: \"601fd1b5-672e-469c-ab36-ea8b202585b6\") " Oct 13 08:04:59 crc kubenswrapper[4833]: I1013 08:04:59.952809 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601fd1b5-672e-469c-ab36-ea8b202585b6-kube-api-access-lsctj" (OuterVolumeSpecName: "kube-api-access-lsctj") pod "601fd1b5-672e-469c-ab36-ea8b202585b6" (UID: "601fd1b5-672e-469c-ab36-ea8b202585b6"). 
InnerVolumeSpecName "kube-api-access-lsctj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:05:00 crc kubenswrapper[4833]: I1013 08:05:00.045251 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsctj\" (UniqueName: \"kubernetes.io/projected/601fd1b5-672e-469c-ab36-ea8b202585b6-kube-api-access-lsctj\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:00 crc kubenswrapper[4833]: I1013 08:05:00.495931 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jspfq" event={"ID":"601fd1b5-672e-469c-ab36-ea8b202585b6","Type":"ContainerDied","Data":"d5c79fcb267e467fd24acfc054ece66f5a25099590758290194e613df3234181"} Oct 13 08:05:00 crc kubenswrapper[4833]: I1013 08:05:00.495998 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c79fcb267e467fd24acfc054ece66f5a25099590758290194e613df3234181" Oct 13 08:05:00 crc kubenswrapper[4833]: I1013 08:05:00.496055 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jspfq" Oct 13 08:05:02 crc kubenswrapper[4833]: I1013 08:05:02.629962 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:05:02 crc kubenswrapper[4833]: E1013 08:05:02.631792 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:05:06 crc kubenswrapper[4833]: I1013 08:05:06.837776 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9467-account-create-kw92q"] Oct 13 08:05:06 crc kubenswrapper[4833]: E1013 08:05:06.840724 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601fd1b5-672e-469c-ab36-ea8b202585b6" containerName="mariadb-database-create" Oct 13 08:05:06 crc kubenswrapper[4833]: I1013 08:05:06.842088 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="601fd1b5-672e-469c-ab36-ea8b202585b6" containerName="mariadb-database-create" Oct 13 08:05:06 crc kubenswrapper[4833]: I1013 08:05:06.842687 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="601fd1b5-672e-469c-ab36-ea8b202585b6" containerName="mariadb-database-create" Oct 13 08:05:06 crc kubenswrapper[4833]: I1013 08:05:06.843948 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9467-account-create-kw92q" Oct 13 08:05:06 crc kubenswrapper[4833]: I1013 08:05:06.847039 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 13 08:05:06 crc kubenswrapper[4833]: I1013 08:05:06.848033 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9467-account-create-kw92q"] Oct 13 08:05:06 crc kubenswrapper[4833]: I1013 08:05:06.889250 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4f5t\" (UniqueName: \"kubernetes.io/projected/dc7c7e79-b563-4b67-81de-4bd0ca50cb0a-kube-api-access-j4f5t\") pod \"glance-9467-account-create-kw92q\" (UID: \"dc7c7e79-b563-4b67-81de-4bd0ca50cb0a\") " pod="openstack/glance-9467-account-create-kw92q" Oct 13 08:05:06 crc kubenswrapper[4833]: I1013 08:05:06.990211 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4f5t\" (UniqueName: \"kubernetes.io/projected/dc7c7e79-b563-4b67-81de-4bd0ca50cb0a-kube-api-access-j4f5t\") pod \"glance-9467-account-create-kw92q\" (UID: \"dc7c7e79-b563-4b67-81de-4bd0ca50cb0a\") " pod="openstack/glance-9467-account-create-kw92q" Oct 13 08:05:07 crc kubenswrapper[4833]: I1013 08:05:07.018478 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4f5t\" (UniqueName: \"kubernetes.io/projected/dc7c7e79-b563-4b67-81de-4bd0ca50cb0a-kube-api-access-j4f5t\") pod \"glance-9467-account-create-kw92q\" (UID: \"dc7c7e79-b563-4b67-81de-4bd0ca50cb0a\") " pod="openstack/glance-9467-account-create-kw92q" Oct 13 08:05:07 crc kubenswrapper[4833]: I1013 08:05:07.168015 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9467-account-create-kw92q" Oct 13 08:05:07 crc kubenswrapper[4833]: I1013 08:05:07.463567 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9467-account-create-kw92q"] Oct 13 08:05:07 crc kubenswrapper[4833]: I1013 08:05:07.579661 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9467-account-create-kw92q" event={"ID":"dc7c7e79-b563-4b67-81de-4bd0ca50cb0a","Type":"ContainerStarted","Data":"c13126cbc1f651ed78f3c083aa8a5492b7338e1a757fa969c2926174595b44ee"} Oct 13 08:05:08 crc kubenswrapper[4833]: I1013 08:05:08.595290 4833 generic.go:334] "Generic (PLEG): container finished" podID="dc7c7e79-b563-4b67-81de-4bd0ca50cb0a" containerID="e849214437c4b6212427c2c853defeb99c7ab6ca4fa6e178da01d81866796277" exitCode=0 Oct 13 08:05:08 crc kubenswrapper[4833]: I1013 08:05:08.595360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9467-account-create-kw92q" event={"ID":"dc7c7e79-b563-4b67-81de-4bd0ca50cb0a","Type":"ContainerDied","Data":"e849214437c4b6212427c2c853defeb99c7ab6ca4fa6e178da01d81866796277"} Oct 13 08:05:09 crc kubenswrapper[4833]: I1013 08:05:09.986077 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9467-account-create-kw92q" Oct 13 08:05:10 crc kubenswrapper[4833]: I1013 08:05:10.162703 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4f5t\" (UniqueName: \"kubernetes.io/projected/dc7c7e79-b563-4b67-81de-4bd0ca50cb0a-kube-api-access-j4f5t\") pod \"dc7c7e79-b563-4b67-81de-4bd0ca50cb0a\" (UID: \"dc7c7e79-b563-4b67-81de-4bd0ca50cb0a\") " Oct 13 08:05:10 crc kubenswrapper[4833]: I1013 08:05:10.168830 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7c7e79-b563-4b67-81de-4bd0ca50cb0a-kube-api-access-j4f5t" (OuterVolumeSpecName: "kube-api-access-j4f5t") pod "dc7c7e79-b563-4b67-81de-4bd0ca50cb0a" (UID: "dc7c7e79-b563-4b67-81de-4bd0ca50cb0a"). InnerVolumeSpecName "kube-api-access-j4f5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:05:10 crc kubenswrapper[4833]: I1013 08:05:10.265657 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4f5t\" (UniqueName: \"kubernetes.io/projected/dc7c7e79-b563-4b67-81de-4bd0ca50cb0a-kube-api-access-j4f5t\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:10 crc kubenswrapper[4833]: I1013 08:05:10.623286 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9467-account-create-kw92q" event={"ID":"dc7c7e79-b563-4b67-81de-4bd0ca50cb0a","Type":"ContainerDied","Data":"c13126cbc1f651ed78f3c083aa8a5492b7338e1a757fa969c2926174595b44ee"} Oct 13 08:05:10 crc kubenswrapper[4833]: I1013 08:05:10.623362 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13126cbc1f651ed78f3c083aa8a5492b7338e1a757fa969c2926174595b44ee" Oct 13 08:05:10 crc kubenswrapper[4833]: I1013 08:05:10.623310 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9467-account-create-kw92q" Oct 13 08:05:11 crc kubenswrapper[4833]: I1013 08:05:11.984894 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-h5lrr"] Oct 13 08:05:11 crc kubenswrapper[4833]: E1013 08:05:11.986001 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7c7e79-b563-4b67-81de-4bd0ca50cb0a" containerName="mariadb-account-create" Oct 13 08:05:11 crc kubenswrapper[4833]: I1013 08:05:11.986030 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7c7e79-b563-4b67-81de-4bd0ca50cb0a" containerName="mariadb-account-create" Oct 13 08:05:11 crc kubenswrapper[4833]: I1013 08:05:11.986331 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7c7e79-b563-4b67-81de-4bd0ca50cb0a" containerName="mariadb-account-create" Oct 13 08:05:11 crc kubenswrapper[4833]: I1013 08:05:11.987382 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:11 crc kubenswrapper[4833]: I1013 08:05:11.990594 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 13 08:05:11 crc kubenswrapper[4833]: I1013 08:05:11.990993 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l8ns9" Oct 13 08:05:11 crc kubenswrapper[4833]: I1013 08:05:11.999074 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-h5lrr"] Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.002254 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-config-data\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.002319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-combined-ca-bundle\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.002470 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfb9\" (UniqueName: \"kubernetes.io/projected/4aefd2f1-a059-411f-a13c-767d0f58a117-kube-api-access-vrfb9\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.002505 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-db-sync-config-data\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.104769 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-config-data\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.104815 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-combined-ca-bundle\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.104910 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfb9\" (UniqueName: \"kubernetes.io/projected/4aefd2f1-a059-411f-a13c-767d0f58a117-kube-api-access-vrfb9\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.104933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-db-sync-config-data\") pod 
\"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.111019 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-combined-ca-bundle\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.111073 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-db-sync-config-data\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.114557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-config-data\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.127120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfb9\" (UniqueName: \"kubernetes.io/projected/4aefd2f1-a059-411f-a13c-767d0f58a117-kube-api-access-vrfb9\") pod \"glance-db-sync-h5lrr\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.316056 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:12 crc kubenswrapper[4833]: I1013 08:05:12.861860 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-h5lrr"] Oct 13 08:05:13 crc kubenswrapper[4833]: I1013 08:05:13.671944 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-h5lrr" event={"ID":"4aefd2f1-a059-411f-a13c-767d0f58a117","Type":"ContainerStarted","Data":"ba1564ddd4c25a7af1e27230ca7758620e0484cf78d84e5ae663de7fbef5e22b"} Oct 13 08:05:13 crc kubenswrapper[4833]: I1013 08:05:13.673212 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-h5lrr" event={"ID":"4aefd2f1-a059-411f-a13c-767d0f58a117","Type":"ContainerStarted","Data":"4cbbb27b1430b2e92f27ea71784cc3030b5e90d711f0f696263f47d8a30535a3"} Oct 13 08:05:15 crc kubenswrapper[4833]: I1013 08:05:15.627955 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:05:15 crc kubenswrapper[4833]: E1013 08:05:15.630138 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:05:16 crc kubenswrapper[4833]: I1013 08:05:16.713851 4833 generic.go:334] "Generic (PLEG): container finished" podID="4aefd2f1-a059-411f-a13c-767d0f58a117" containerID="ba1564ddd4c25a7af1e27230ca7758620e0484cf78d84e5ae663de7fbef5e22b" exitCode=0 Oct 13 08:05:16 crc kubenswrapper[4833]: I1013 08:05:16.713921 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-h5lrr" event={"ID":"4aefd2f1-a059-411f-a13c-767d0f58a117","Type":"ContainerDied","Data":"ba1564ddd4c25a7af1e27230ca7758620e0484cf78d84e5ae663de7fbef5e22b"} Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.201295 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.226883 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrfb9\" (UniqueName: \"kubernetes.io/projected/4aefd2f1-a059-411f-a13c-767d0f58a117-kube-api-access-vrfb9\") pod \"4aefd2f1-a059-411f-a13c-767d0f58a117\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.227283 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-combined-ca-bundle\") pod \"4aefd2f1-a059-411f-a13c-767d0f58a117\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.227309 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-config-data\") pod \"4aefd2f1-a059-411f-a13c-767d0f58a117\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.227337 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-db-sync-config-data\") pod \"4aefd2f1-a059-411f-a13c-767d0f58a117\" (UID: \"4aefd2f1-a059-411f-a13c-767d0f58a117\") " Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.253919 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4aefd2f1-a059-411f-a13c-767d0f58a117" (UID: "4aefd2f1-a059-411f-a13c-767d0f58a117"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.262886 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aefd2f1-a059-411f-a13c-767d0f58a117-kube-api-access-vrfb9" (OuterVolumeSpecName: "kube-api-access-vrfb9") pod "4aefd2f1-a059-411f-a13c-767d0f58a117" (UID: "4aefd2f1-a059-411f-a13c-767d0f58a117"). InnerVolumeSpecName "kube-api-access-vrfb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.266959 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4aefd2f1-a059-411f-a13c-767d0f58a117" (UID: "4aefd2f1-a059-411f-a13c-767d0f58a117"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.301681 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-config-data" (OuterVolumeSpecName: "config-data") pod "4aefd2f1-a059-411f-a13c-767d0f58a117" (UID: "4aefd2f1-a059-411f-a13c-767d0f58a117"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.329832 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.329874 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.329887 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aefd2f1-a059-411f-a13c-767d0f58a117-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.329899 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrfb9\" (UniqueName: \"kubernetes.io/projected/4aefd2f1-a059-411f-a13c-767d0f58a117-kube-api-access-vrfb9\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.790870 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-h5lrr" event={"ID":"4aefd2f1-a059-411f-a13c-767d0f58a117","Type":"ContainerDied","Data":"4cbbb27b1430b2e92f27ea71784cc3030b5e90d711f0f696263f47d8a30535a3"} Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.790930 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cbbb27b1430b2e92f27ea71784cc3030b5e90d711f0f696263f47d8a30535a3" Oct 13 08:05:18 crc kubenswrapper[4833]: I1013 08:05:18.790936 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-h5lrr" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.167587 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:19 crc kubenswrapper[4833]: E1013 08:05:19.168294 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aefd2f1-a059-411f-a13c-767d0f58a117" containerName="glance-db-sync" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.168310 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aefd2f1-a059-411f-a13c-767d0f58a117" containerName="glance-db-sync" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.168465 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aefd2f1-a059-411f-a13c-767d0f58a117" containerName="glance-db-sync" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.169434 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.174413 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.174598 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l8ns9" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.174714 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.187609 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b8cc95f99-9kjsw"] Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.189294 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.203582 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8cc95f99-9kjsw"] Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.219750 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.274918 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.277188 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.282677 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.320372 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.352455 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-config\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.352597 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.352672 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.352727 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 
08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.355347 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.355464 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-logs\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-dns-svc\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357378 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4zxv\" (UniqueName: \"kubernetes.io/projected/a33e9d7e-8788-45d9-9a9c-74dff59236f7-kube-api-access-f4zxv\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357416 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357479 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357550 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tcfg\" (UniqueName: \"kubernetes.io/projected/e556e304-b751-4ce9-98c3-fb377a87eaed-kube-api-access-9tcfg\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357589 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357705 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357741 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357791 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357825 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.357966 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jbp6\" (UniqueName: \"kubernetes.io/projected/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-kube-api-access-6jbp6\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.459879 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.459935 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.459969 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-logs\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460046 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-dns-svc\") pod 
\"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4zxv\" (UniqueName: \"kubernetes.io/projected/a33e9d7e-8788-45d9-9a9c-74dff59236f7-kube-api-access-f4zxv\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460109 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460123 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tcfg\" (UniqueName: \"kubernetes.io/projected/e556e304-b751-4ce9-98c3-fb377a87eaed-kube-api-access-9tcfg\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460144 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460166 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460185 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460245 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460278 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jbp6\" (UniqueName: \"kubernetes.io/projected/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-kube-api-access-6jbp6\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460304 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-config\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.460321 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.461162 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.461225 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.461325 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-dns-svc\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.461613 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.461750 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.462065 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-config\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.463974 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-logs\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.464466 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.464791 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.467133 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.468228 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.468344 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.468612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.473066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.474691 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jbp6\" (UniqueName: \"kubernetes.io/projected/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-kube-api-access-6jbp6\") pod \"glance-default-external-api-0\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.475135 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tcfg\" (UniqueName: 
\"kubernetes.io/projected/e556e304-b751-4ce9-98c3-fb377a87eaed-kube-api-access-9tcfg\") pod \"glance-default-internal-api-0\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.475262 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4zxv\" (UniqueName: \"kubernetes.io/projected/a33e9d7e-8788-45d9-9a9c-74dff59236f7-kube-api-access-f4zxv\") pod \"dnsmasq-dns-5b8cc95f99-9kjsw\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.486601 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.506099 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.639639 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:19 crc kubenswrapper[4833]: I1013 08:05:19.993191 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8cc95f99-9kjsw"] Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.081485 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:20 crc kubenswrapper[4833]: W1013 08:05:20.085843 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7db1c3d_e118_4d6d_aadf_0e69a43976e2.slice/crio-b1819776b8e02e9e0f680e2999a8fb8d2c9cacba71b946c0c778b13f6dd87e54 WatchSource:0}: Error finding container b1819776b8e02e9e0f680e2999a8fb8d2c9cacba71b946c0c778b13f6dd87e54: Status 404 returned error can't find the container with id b1819776b8e02e9e0f680e2999a8fb8d2c9cacba71b946c0c778b13f6dd87e54 Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.200984 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:20 crc kubenswrapper[4833]: W1013 08:05:20.214349 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode556e304_b751_4ce9_98c3_fb377a87eaed.slice/crio-3ce44f71314a7046ce2f14912910e45c327c20fe57e333263974c4006a87e22e WatchSource:0}: Error finding container 3ce44f71314a7046ce2f14912910e45c327c20fe57e333263974c4006a87e22e: Status 404 returned error can't find the container with id 3ce44f71314a7046ce2f14912910e45c327c20fe57e333263974c4006a87e22e Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.404743 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.814870 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7db1c3d-e118-4d6d-aadf-0e69a43976e2","Type":"ContainerStarted","Data":"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622"} Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.815272 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7db1c3d-e118-4d6d-aadf-0e69a43976e2","Type":"ContainerStarted","Data":"b1819776b8e02e9e0f680e2999a8fb8d2c9cacba71b946c0c778b13f6dd87e54"} Oct 13 
08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.817907 4833 generic.go:334] "Generic (PLEG): container finished" podID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" containerID="fafd8a8a1de2dcdcbbf64fe36c7d7062471517266af53ccc7fa5b8532a27b2ed" exitCode=0 Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.818999 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" event={"ID":"a33e9d7e-8788-45d9-9a9c-74dff59236f7","Type":"ContainerDied","Data":"fafd8a8a1de2dcdcbbf64fe36c7d7062471517266af53ccc7fa5b8532a27b2ed"} Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.819104 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" event={"ID":"a33e9d7e-8788-45d9-9a9c-74dff59236f7","Type":"ContainerStarted","Data":"23185dbd40cdd3cb99fc1657970dc10514e791bca9143eb72ce1b8a05e27dbba"} Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.824301 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e556e304-b751-4ce9-98c3-fb377a87eaed","Type":"ContainerStarted","Data":"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99"} Oct 13 08:05:20 crc kubenswrapper[4833]: I1013 08:05:20.824337 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e556e304-b751-4ce9-98c3-fb377a87eaed","Type":"ContainerStarted","Data":"3ce44f71314a7046ce2f14912910e45c327c20fe57e333263974c4006a87e22e"} Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.612870 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.831938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" event={"ID":"a33e9d7e-8788-45d9-9a9c-74dff59236f7","Type":"ContainerStarted","Data":"19b68b0ad8d8d1e30bc8f863279feec7caba2ca5a8b65d940f0eb34760b4db34"} Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.832861 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.834037 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e556e304-b751-4ce9-98c3-fb377a87eaed","Type":"ContainerStarted","Data":"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf"} Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.835896 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7db1c3d-e118-4d6d-aadf-0e69a43976e2","Type":"ContainerStarted","Data":"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0"} Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.836081 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" containerName="glance-log" containerID="cri-o://1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622" gracePeriod=30 Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.836230 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" containerName="glance-httpd" containerID="cri-o://ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0" gracePeriod=30 Oct 13 08:05:21 crc 
kubenswrapper[4833]: I1013 08:05:21.858663 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" podStartSLOduration=2.858639333 podStartE2EDuration="2.858639333s" podCreationTimestamp="2025-10-13 08:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:05:21.850843512 +0000 UTC m=+5811.951266428" watchObservedRunningTime="2025-10-13 08:05:21.858639333 +0000 UTC m=+5811.959062249" Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.889645 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.889594573 podStartE2EDuration="2.889594573s" podCreationTimestamp="2025-10-13 08:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:05:21.885302041 +0000 UTC m=+5811.985724967" watchObservedRunningTime="2025-10-13 08:05:21.889594573 +0000 UTC m=+5811.990017499" Oct 13 08:05:21 crc kubenswrapper[4833]: I1013 08:05:21.914284 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.914264994 podStartE2EDuration="2.914264994s" podCreationTimestamp="2025-10-13 08:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:05:21.907734749 +0000 UTC m=+5812.008157665" watchObservedRunningTime="2025-10-13 08:05:21.914264994 +0000 UTC m=+5812.014687910" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.438804 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.530636 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-logs\") pod \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.530779 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-scripts\") pod \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.530858 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-combined-ca-bundle\") pod \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.530977 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-httpd-run\") pod \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.531045 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jbp6\" (UniqueName: \"kubernetes.io/projected/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-kube-api-access-6jbp6\") pod \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.531069 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-config-data\") pod \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\" (UID: \"d7db1c3d-e118-4d6d-aadf-0e69a43976e2\") " Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.531376 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d7db1c3d-e118-4d6d-aadf-0e69a43976e2" (UID: "d7db1c3d-e118-4d6d-aadf-0e69a43976e2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.531662 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.531171 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-logs" (OuterVolumeSpecName: "logs") pod "d7db1c3d-e118-4d6d-aadf-0e69a43976e2" (UID: "d7db1c3d-e118-4d6d-aadf-0e69a43976e2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.537364 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-scripts" (OuterVolumeSpecName: "scripts") pod "d7db1c3d-e118-4d6d-aadf-0e69a43976e2" (UID: "d7db1c3d-e118-4d6d-aadf-0e69a43976e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.553371 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-kube-api-access-6jbp6" (OuterVolumeSpecName: "kube-api-access-6jbp6") pod "d7db1c3d-e118-4d6d-aadf-0e69a43976e2" (UID: "d7db1c3d-e118-4d6d-aadf-0e69a43976e2"). InnerVolumeSpecName "kube-api-access-6jbp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.566670 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7db1c3d-e118-4d6d-aadf-0e69a43976e2" (UID: "d7db1c3d-e118-4d6d-aadf-0e69a43976e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.587871 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-config-data" (OuterVolumeSpecName: "config-data") pod "d7db1c3d-e118-4d6d-aadf-0e69a43976e2" (UID: "d7db1c3d-e118-4d6d-aadf-0e69a43976e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.632286 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jbp6\" (UniqueName: \"kubernetes.io/projected/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-kube-api-access-6jbp6\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.632317 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.632327 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.632336 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.632344 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7db1c3d-e118-4d6d-aadf-0e69a43976e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.847070 4833 generic.go:334] "Generic (PLEG): container finished" podID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" containerID="ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0" exitCode=0 Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.847111 4833 generic.go:334] "Generic (PLEG): container finished" podID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" 
containerID="1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622" exitCode=143 Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.847203 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.847282 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7db1c3d-e118-4d6d-aadf-0e69a43976e2","Type":"ContainerDied","Data":"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0"} Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.847332 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7db1c3d-e118-4d6d-aadf-0e69a43976e2","Type":"ContainerDied","Data":"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622"} Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.847355 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7db1c3d-e118-4d6d-aadf-0e69a43976e2","Type":"ContainerDied","Data":"b1819776b8e02e9e0f680e2999a8fb8d2c9cacba71b946c0c778b13f6dd87e54"} Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.847387 4833 scope.go:117] "RemoveContainer" containerID="ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.848169 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerName="glance-log" containerID="cri-o://3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99" gracePeriod=30 Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.848289 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerName="glance-httpd" containerID="cri-o://7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf" gracePeriod=30 Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.887901 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.897072 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.902206 4833 scope.go:117] "RemoveContainer" containerID="1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.914644 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:22 crc kubenswrapper[4833]: E1013 08:05:22.915307 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" containerName="glance-httpd" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.915339 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" containerName="glance-httpd" Oct 13 08:05:22 crc kubenswrapper[4833]: E1013 08:05:22.915365 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" containerName="glance-log" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.915379 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" 
containerName="glance-log" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.915768 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" containerName="glance-log" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.915824 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" containerName="glance-httpd" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.917618 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.921698 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.921825 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.936690 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.948238 4833 scope.go:117] "RemoveContainer" containerID="ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0" Oct 13 08:05:22 crc kubenswrapper[4833]: E1013 08:05:22.950328 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0\": container with ID starting with ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0 not found: ID does not exist" containerID="ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.950397 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0"} err="failed to get container status \"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0\": rpc error: code = NotFound desc = could not find container \"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0\": container with ID starting with ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0 not found: ID does not exist" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.950435 4833 scope.go:117] "RemoveContainer" containerID="1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622" Oct 13 08:05:22 crc kubenswrapper[4833]: E1013 08:05:22.950994 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622\": container with ID starting with 1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622 not found: ID does not exist" containerID="1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.951043 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622"} err="failed to get container status \"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622\": rpc error: code = NotFound desc = could not find container \"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622\": container with ID starting with 
1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622 not found: ID does not exist" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.951069 4833 scope.go:117] "RemoveContainer" containerID="ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.951481 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0"} err="failed to get container status \"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0\": rpc error: code = NotFound desc = could not find container \"ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0\": container with ID starting with ee0a05fd88efcf8d7d9c147003119efff3a1d71226ece7dd0516a7cbcbfe9ff0 not found: ID does not exist" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.951556 4833 scope.go:117] "RemoveContainer" containerID="1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622" Oct 13 08:05:22 crc kubenswrapper[4833]: I1013 08:05:22.951868 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622"} err="failed to get container status \"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622\": rpc error: code = NotFound desc = could not find container \"1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622\": container with ID starting with 1c5fd17283f9df64984b4ef3dd0101addf449d7224c843d2b0845cc052141622 not found: ID does not exist" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.051182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrg6\" (UniqueName: \"kubernetes.io/projected/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-kube-api-access-jvrg6\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.051280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.051311 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-logs\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.051328 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.051343 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.051367 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.051419 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.153220 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvrg6\" (UniqueName: \"kubernetes.io/projected/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-kube-api-access-jvrg6\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.153341 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.153382 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-logs\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.153405 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.153425 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.153460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.153556 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.154731 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-logs\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.155100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.158509 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.158753 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.159504 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.170218 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.177141 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvrg6\" (UniqueName: \"kubernetes.io/projected/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-kube-api-access-jvrg6\") pod \"glance-default-external-api-0\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.247080 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.434915 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.561176 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-config-data\") pod \"e556e304-b751-4ce9-98c3-fb377a87eaed\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.561212 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-combined-ca-bundle\") pod \"e556e304-b751-4ce9-98c3-fb377a87eaed\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.561288 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-logs\") pod \"e556e304-b751-4ce9-98c3-fb377a87eaed\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.561354 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tcfg\" (UniqueName: \"kubernetes.io/projected/e556e304-b751-4ce9-98c3-fb377a87eaed-kube-api-access-9tcfg\") pod \"e556e304-b751-4ce9-98c3-fb377a87eaed\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.561370 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-httpd-run\") pod \"e556e304-b751-4ce9-98c3-fb377a87eaed\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.561391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-scripts\") pod \"e556e304-b751-4ce9-98c3-fb377a87eaed\" (UID: \"e556e304-b751-4ce9-98c3-fb377a87eaed\") " Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.561882 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e556e304-b751-4ce9-98c3-fb377a87eaed" (UID: "e556e304-b751-4ce9-98c3-fb377a87eaed"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.561920 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-logs" (OuterVolumeSpecName: "logs") pod "e556e304-b751-4ce9-98c3-fb377a87eaed" (UID: "e556e304-b751-4ce9-98c3-fb377a87eaed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.565922 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e556e304-b751-4ce9-98c3-fb377a87eaed-kube-api-access-9tcfg" (OuterVolumeSpecName: "kube-api-access-9tcfg") pod "e556e304-b751-4ce9-98c3-fb377a87eaed" (UID: "e556e304-b751-4ce9-98c3-fb377a87eaed"). InnerVolumeSpecName "kube-api-access-9tcfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.576815 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-scripts" (OuterVolumeSpecName: "scripts") pod "e556e304-b751-4ce9-98c3-fb377a87eaed" (UID: "e556e304-b751-4ce9-98c3-fb377a87eaed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.600638 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e556e304-b751-4ce9-98c3-fb377a87eaed" (UID: "e556e304-b751-4ce9-98c3-fb377a87eaed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.618927 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-config-data" (OuterVolumeSpecName: "config-data") pod "e556e304-b751-4ce9-98c3-fb377a87eaed" (UID: "e556e304-b751-4ce9-98c3-fb377a87eaed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.663352 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tcfg\" (UniqueName: \"kubernetes.io/projected/e556e304-b751-4ce9-98c3-fb377a87eaed-kube-api-access-9tcfg\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.663381 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.663390 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.663398 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.663406 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e556e304-b751-4ce9-98c3-fb377a87eaed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.663415 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e556e304-b751-4ce9-98c3-fb377a87eaed-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.828709 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:05:23 crc kubenswrapper[4833]: W1013 08:05:23.832941 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8e44ad3_827f_4a9c_8f7e_a41059fcf803.slice/crio-cf2a6323a9ece9e4c80a38cfb93d7f0a6ffbca9601eaeb92a5a2082398d71caf WatchSource:0}: Error finding container cf2a6323a9ece9e4c80a38cfb93d7f0a6ffbca9601eaeb92a5a2082398d71caf: Status 404 returned error can't find the container with 
id cf2a6323a9ece9e4c80a38cfb93d7f0a6ffbca9601eaeb92a5a2082398d71caf Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.857938 4833 generic.go:334] "Generic (PLEG): container finished" podID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerID="7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf" exitCode=0 Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.857973 4833 generic.go:334] "Generic (PLEG): container finished" podID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerID="3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99" exitCode=143 Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.858016 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e556e304-b751-4ce9-98c3-fb377a87eaed","Type":"ContainerDied","Data":"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf"} Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.858047 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e556e304-b751-4ce9-98c3-fb377a87eaed","Type":"ContainerDied","Data":"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99"} Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.858060 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e556e304-b751-4ce9-98c3-fb377a87eaed","Type":"ContainerDied","Data":"3ce44f71314a7046ce2f14912910e45c327c20fe57e333263974c4006a87e22e"} Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.858077 4833 scope.go:117] "RemoveContainer" containerID="7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.858182 4833 util.go:48] "No ready sandbox for pod can be found. 
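
[annotation] The two "container finished" records above carry different exit codes: one container (7f5e6dff...) exits 0, the other (3162cf1f...) exits 143. 143 is the 128+N convention for signal deaths: 128 + 15, i.e. SIGTERM, meaning the container was terminated as part of the pod's graceful shutdown rather than failing on its own. A small decoder for the convention (plain Python, no kubelet dependencies):

    import signal

    def describe_exit(code: int) -> str:
        """Decode a container exit code using the 128+N signal convention."""
        if code == 0:
            return "clean exit"
        if code > 128:
            sig = code - 128
            try:
                return f"killed by {signal.Signals(sig).name} (128 + {sig})"
            except ValueError:
                return f"killed by unknown signal {sig}"
        return f"application error {code}"

    print(describe_exit(0))    # clean exit
    print(describe_exit(143))  # killed by SIGTERM (128 + 15)
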
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.867109 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8e44ad3-827f-4a9c-8f7e-a41059fcf803","Type":"ContainerStarted","Data":"cf2a6323a9ece9e4c80a38cfb93d7f0a6ffbca9601eaeb92a5a2082398d71caf"} Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.897468 4833 scope.go:117] "RemoveContainer" containerID="3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.907356 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.918305 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.927255 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:23 crc kubenswrapper[4833]: E1013 08:05:23.927686 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerName="glance-log" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.927704 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerName="glance-log" Oct 13 08:05:23 crc kubenswrapper[4833]: E1013 08:05:23.927716 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerName="glance-httpd" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.927722 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerName="glance-httpd" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.927891 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerName="glance-httpd" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.927912 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" containerName="glance-log" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.929215 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.933336 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.933593 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 13 08:05:23 crc kubenswrapper[4833]: I1013 08:05:23.939478 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.016329 4833 scope.go:117] "RemoveContainer" containerID="7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf" Oct 13 08:05:24 crc kubenswrapper[4833]: E1013 08:05:24.018604 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf\": container with ID starting with 7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf not found: ID does not exist" containerID="7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.018628 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf"} err="failed to get container status \"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf\": rpc error: code = NotFound desc = could not find container \"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf\": container with ID starting with 7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf not found: ID does not exist" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.018649 4833 scope.go:117] "RemoveContainer" containerID="3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99" Oct 13 08:05:24 crc kubenswrapper[4833]: E1013 08:05:24.018905 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99\": container with ID starting with 3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99 not found: ID does not exist" containerID="3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.018928 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99"} err="failed to get container status \"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99\": rpc error: code = NotFound desc = could not find container \"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99\": container with ID starting with 3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99 not found: ID does not exist" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.018941 4833 scope.go:117] "RemoveContainer" containerID="7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.019205 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf"} err="failed to get container status 
\"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf\": rpc error: code = NotFound desc = could not find container \"7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf\": container with ID starting with 7f5e6dff7264ea9b9ac1c35b08f3cd0b41812111102848cc35454ab51b6d57bf not found: ID does not exist" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.019222 4833 scope.go:117] "RemoveContainer" containerID="3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.019616 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99"} err="failed to get container status \"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99\": rpc error: code = NotFound desc = could not find container \"3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99\": container with ID starting with 3162cf1fe00b1cde2ffefe94a1fde665fc5609e8a9f291b169e2e7fa48810c99 not found: ID does not exist" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.071742 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc4np\" (UniqueName: \"kubernetes.io/projected/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-kube-api-access-zc4np\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.071810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.071856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.071902 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.071957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.071983 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 
08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.072039 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.174098 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc4np\" (UniqueName: \"kubernetes.io/projected/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-kube-api-access-zc4np\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.174153 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.174179 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.174195 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.174222 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.174251 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.174284 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.175216 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.175680 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.178631 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.179126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.179227 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.179724 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.198642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc4np\" (UniqueName: \"kubernetes.io/projected/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-kube-api-access-zc4np\") pod \"glance-default-internal-api-0\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.312932 4833 util.go:30] "No sandbox for pod can be found. 
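
[annotation] For the replacement internal-api pod (new UID cbcef078-11a6-49ff-9d33-bc1a608c6f7f) the reconciler walks every volume through the same three stages logged above: "VerifyControllerAttachedVolume started", "MountVolume started", and "MountVolume.SetUp succeeded". Recording the highest stage each volume reaches is a quick way to spot a mount that stalls partway; a sketch under the same journal-format assumption as earlier (last_stage is a hypothetical helper):

    import re

    # One pattern per reconciler stage, each capturing the escaped volume name.
    PATTERNS = {
        1: re.compile(r'VerifyControllerAttachedVolume started for volume'
                      r' \\?"([^"\\]+)\\?"'),
        2: re.compile(r'operationExecutor\.MountVolume started for volume'
                      r' \\?"([^"\\]+)\\?"'),
        3: re.compile(r'MountVolume\.SetUp succeeded for volume'
                      r' \\?"([^"\\]+)\\?"'),
    }

    def last_stage(text: str) -> dict:
        """Map each volume name to the highest mount stage it reached (1-3)."""
        state = {}
        for rank, pat in PATTERNS.items():
            for m in pat.finditer(text):
                state[m.group(1)] = max(state.get(m.group(1), 0), rank)
        return state

    # Volumes left below stage 3 never completed SetUp.
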
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.640145 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7db1c3d-e118-4d6d-aadf-0e69a43976e2" path="/var/lib/kubelet/pods/d7db1c3d-e118-4d6d-aadf-0e69a43976e2/volumes" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.641153 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e556e304-b751-4ce9-98c3-fb377a87eaed" path="/var/lib/kubelet/pods/e556e304-b751-4ce9-98c3-fb377a87eaed/volumes" Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.851183 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.878890 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8e44ad3-827f-4a9c-8f7e-a41059fcf803","Type":"ContainerStarted","Data":"6a5afbe4012a503f715489125cf71b882308e6c76f318b7bdd7a537498fca932"} Oct 13 08:05:24 crc kubenswrapper[4833]: I1013 08:05:24.880043 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbcef078-11a6-49ff-9d33-bc1a608c6f7f","Type":"ContainerStarted","Data":"acaf13fa54a0539d83207d42cd4bea354f21fdf154e6a7babffc4ac5a4180bc2"} Oct 13 08:05:25 crc kubenswrapper[4833]: I1013 08:05:25.892912 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8e44ad3-827f-4a9c-8f7e-a41059fcf803","Type":"ContainerStarted","Data":"97c61ac8d49253595623e276bf34e21ce13946ddf579d589d7c5898c7404a95b"} Oct 13 08:05:25 crc kubenswrapper[4833]: I1013 08:05:25.896427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbcef078-11a6-49ff-9d33-bc1a608c6f7f","Type":"ContainerStarted","Data":"a47a5aa6a855292702dd5b5c9fc13e725ac9267c86798cbc6677ec69594359c3"} Oct 13 08:05:25 crc kubenswrapper[4833]: I1013 08:05:25.917462 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.91744476 podStartE2EDuration="3.91744476s" podCreationTimestamp="2025-10-13 08:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:05:25.908446925 +0000 UTC m=+5816.008869841" watchObservedRunningTime="2025-10-13 08:05:25.91744476 +0000 UTC m=+5816.017867676" Oct 13 08:05:26 crc kubenswrapper[4833]: I1013 08:05:26.927684 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbcef078-11a6-49ff-9d33-bc1a608c6f7f","Type":"ContainerStarted","Data":"c7bcdd9c07d5ad56dd5a7335442b83dd88136ed846c66a8003bbb890ba86216e"} Oct 13 08:05:27 crc kubenswrapper[4833]: I1013 08:05:27.628394 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:05:27 crc kubenswrapper[4833]: E1013 08:05:27.629212 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:05:29 crc kubenswrapper[4833]: I1013 08:05:29.508828 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:05:29 crc kubenswrapper[4833]: I1013 08:05:29.541847 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.541800108 podStartE2EDuration="6.541800108s" podCreationTimestamp="2025-10-13 08:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:05:26.955962209 +0000 UTC m=+5817.056385145" watchObservedRunningTime="2025-10-13 08:05:29.541800108 +0000 UTC m=+5819.642223104" Oct 13 08:05:29 crc kubenswrapper[4833]: I1013 08:05:29.619005 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69565fc5c9-v8vbs"] Oct 13 08:05:29 crc kubenswrapper[4833]: I1013 08:05:29.619289 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" podUID="d0a0cc6b-1478-4d81-b17b-467aee896980" containerName="dnsmasq-dns" containerID="cri-o://e3900fde784d1198adfad6002cc06aeff629e1828a24a0674ae9e41ebdf0f38d" gracePeriod=10 Oct 13 08:05:29 crc kubenswrapper[4833]: I1013 08:05:29.958164 4833 generic.go:334] "Generic (PLEG): container finished" podID="d0a0cc6b-1478-4d81-b17b-467aee896980" containerID="e3900fde784d1198adfad6002cc06aeff629e1828a24a0674ae9e41ebdf0f38d" exitCode=0 Oct 13 08:05:29 crc kubenswrapper[4833]: I1013 08:05:29.958568 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" event={"ID":"d0a0cc6b-1478-4d81-b17b-467aee896980","Type":"ContainerDied","Data":"e3900fde784d1198adfad6002cc06aeff629e1828a24a0674ae9e41ebdf0f38d"} Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.115693 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.296232 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-config\") pod \"d0a0cc6b-1478-4d81-b17b-467aee896980\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.296762 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-sb\") pod \"d0a0cc6b-1478-4d81-b17b-467aee896980\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.296789 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-nb\") pod \"d0a0cc6b-1478-4d81-b17b-467aee896980\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.296835 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-dns-svc\") pod \"d0a0cc6b-1478-4d81-b17b-467aee896980\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.296876 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72png\" (UniqueName: \"kubernetes.io/projected/d0a0cc6b-1478-4d81-b17b-467aee896980-kube-api-access-72png\") pod \"d0a0cc6b-1478-4d81-b17b-467aee896980\" (UID: \"d0a0cc6b-1478-4d81-b17b-467aee896980\") " Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.303514 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a0cc6b-1478-4d81-b17b-467aee896980-kube-api-access-72png" (OuterVolumeSpecName: "kube-api-access-72png") pod "d0a0cc6b-1478-4d81-b17b-467aee896980" (UID: "d0a0cc6b-1478-4d81-b17b-467aee896980"). InnerVolumeSpecName "kube-api-access-72png". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.352979 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0a0cc6b-1478-4d81-b17b-467aee896980" (UID: "d0a0cc6b-1478-4d81-b17b-467aee896980"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.361139 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-config" (OuterVolumeSpecName: "config") pod "d0a0cc6b-1478-4d81-b17b-467aee896980" (UID: "d0a0cc6b-1478-4d81-b17b-467aee896980"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.363919 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0a0cc6b-1478-4d81-b17b-467aee896980" (UID: "d0a0cc6b-1478-4d81-b17b-467aee896980"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.365506 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0a0cc6b-1478-4d81-b17b-467aee896980" (UID: "d0a0cc6b-1478-4d81-b17b-467aee896980"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.398556 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.398600 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.398613 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.398625 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72png\" (UniqueName: \"kubernetes.io/projected/d0a0cc6b-1478-4d81-b17b-467aee896980-kube-api-access-72png\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.398641 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a0cc6b-1478-4d81-b17b-467aee896980-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.969972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" event={"ID":"d0a0cc6b-1478-4d81-b17b-467aee896980","Type":"ContainerDied","Data":"f96daeafefa6ef09d513a1099d22e29e6545ec66698b04b43e6a2a6ddbffcc58"} Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.970045 4833 scope.go:117] "RemoveContainer" containerID="e3900fde784d1198adfad6002cc06aeff629e1828a24a0674ae9e41ebdf0f38d" Oct 13 08:05:30 crc kubenswrapper[4833]: I1013 08:05:30.970074 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69565fc5c9-v8vbs" Oct 13 08:05:31 crc kubenswrapper[4833]: I1013 08:05:31.012651 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69565fc5c9-v8vbs"] Oct 13 08:05:31 crc kubenswrapper[4833]: I1013 08:05:31.027499 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69565fc5c9-v8vbs"] Oct 13 08:05:31 crc kubenswrapper[4833]: I1013 08:05:31.032246 4833 scope.go:117] "RemoveContainer" containerID="bac3403947034b4a4eeffa06766504cf48545d6cc50def508471adcbd802e709" Oct 13 08:05:32 crc kubenswrapper[4833]: I1013 08:05:32.641375 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a0cc6b-1478-4d81-b17b-467aee896980" path="/var/lib/kubelet/pods/d0a0cc6b-1478-4d81-b17b-467aee896980/volumes" Oct 13 08:05:33 crc kubenswrapper[4833]: I1013 08:05:33.248137 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 08:05:33 crc kubenswrapper[4833]: I1013 08:05:33.248599 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 08:05:33 crc kubenswrapper[4833]: I1013 08:05:33.283919 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 08:05:33 crc kubenswrapper[4833]: I1013 08:05:33.327095 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 08:05:34 crc kubenswrapper[4833]: I1013 08:05:34.002088 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 08:05:34 crc kubenswrapper[4833]: I1013 08:05:34.002466 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 08:05:34 crc kubenswrapper[4833]: I1013 08:05:34.313693 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:34 crc kubenswrapper[4833]: I1013 08:05:34.314360 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:34 crc kubenswrapper[4833]: I1013 08:05:34.352390 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:34 crc kubenswrapper[4833]: I1013 08:05:34.370368 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:35 crc kubenswrapper[4833]: I1013 08:05:35.010495 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:35 crc kubenswrapper[4833]: I1013 08:05:35.010570 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:35 crc kubenswrapper[4833]: I1013 08:05:35.927981 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 08:05:35 crc kubenswrapper[4833]: I1013 08:05:35.939495 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 08:05:36 crc kubenswrapper[4833]: I1013 08:05:36.835889 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Oct 13 08:05:36 crc kubenswrapper[4833]: I1013 08:05:36.879011 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 08:05:38 crc kubenswrapper[4833]: I1013 08:05:38.627376 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:05:38 crc kubenswrapper[4833]: E1013 08:05:38.628047 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:05:38 crc kubenswrapper[4833]: I1013 08:05:38.932497 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xx2gr"] Oct 13 08:05:38 crc kubenswrapper[4833]: E1013 08:05:38.933883 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a0cc6b-1478-4d81-b17b-467aee896980" containerName="init" Oct 13 08:05:38 crc kubenswrapper[4833]: I1013 08:05:38.933916 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a0cc6b-1478-4d81-b17b-467aee896980" containerName="init" Oct 13 08:05:38 crc kubenswrapper[4833]: E1013 08:05:38.933965 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a0cc6b-1478-4d81-b17b-467aee896980" containerName="dnsmasq-dns" Oct 13 08:05:38 crc kubenswrapper[4833]: I1013 08:05:38.933978 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a0cc6b-1478-4d81-b17b-467aee896980" containerName="dnsmasq-dns" Oct 13 08:05:38 crc kubenswrapper[4833]: I1013 08:05:38.934263 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a0cc6b-1478-4d81-b17b-467aee896980" containerName="dnsmasq-dns" Oct 13 08:05:38 crc kubenswrapper[4833]: I1013 08:05:38.936844 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:38 crc kubenswrapper[4833]: I1013 08:05:38.946482 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xx2gr"] Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.086218 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-utilities\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.086565 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzk5p\" (UniqueName: \"kubernetes.io/projected/ee9fbe32-8932-46db-b1b2-07138c9dae9f-kube-api-access-fzk5p\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.086706 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-catalog-content\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.188397 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzk5p\" (UniqueName: \"kubernetes.io/projected/ee9fbe32-8932-46db-b1b2-07138c9dae9f-kube-api-access-fzk5p\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.188499 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-catalog-content\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.188627 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-utilities\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.189339 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-utilities\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.189593 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-catalog-content\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.217982 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fzk5p\" (UniqueName: \"kubernetes.io/projected/ee9fbe32-8932-46db-b1b2-07138c9dae9f-kube-api-access-fzk5p\") pod \"redhat-marketplace-xx2gr\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.262797 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:39 crc kubenswrapper[4833]: I1013 08:05:39.701488 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xx2gr"] Oct 13 08:05:39 crc kubenswrapper[4833]: W1013 08:05:39.719725 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee9fbe32_8932_46db_b1b2_07138c9dae9f.slice/crio-4ee9166bb046e11369ef782f53bb5196f5cca66755cc4ba8f3be8ed403085af9 WatchSource:0}: Error finding container 4ee9166bb046e11369ef782f53bb5196f5cca66755cc4ba8f3be8ed403085af9: Status 404 returned error can't find the container with id 4ee9166bb046e11369ef782f53bb5196f5cca66755cc4ba8f3be8ed403085af9 Oct 13 08:05:40 crc kubenswrapper[4833]: I1013 08:05:40.066402 4833 generic.go:334] "Generic (PLEG): container finished" podID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerID="5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b" exitCode=0 Oct 13 08:05:40 crc kubenswrapper[4833]: I1013 08:05:40.066596 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xx2gr" event={"ID":"ee9fbe32-8932-46db-b1b2-07138c9dae9f","Type":"ContainerDied","Data":"5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b"} Oct 13 08:05:40 crc kubenswrapper[4833]: I1013 08:05:40.066967 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xx2gr" event={"ID":"ee9fbe32-8932-46db-b1b2-07138c9dae9f","Type":"ContainerStarted","Data":"4ee9166bb046e11369ef782f53bb5196f5cca66755cc4ba8f3be8ed403085af9"} Oct 13 08:05:42 crc kubenswrapper[4833]: I1013 08:05:42.091794 4833 generic.go:334] "Generic (PLEG): container finished" podID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerID="db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998" exitCode=0 Oct 13 08:05:42 crc kubenswrapper[4833]: I1013 08:05:42.091856 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xx2gr" event={"ID":"ee9fbe32-8932-46db-b1b2-07138c9dae9f","Type":"ContainerDied","Data":"db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998"} Oct 13 08:05:43 crc kubenswrapper[4833]: I1013 08:05:43.013657 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rdxgm"] Oct 13 08:05:43 crc kubenswrapper[4833]: I1013 08:05:43.016667 4833 util.go:30] "No sandbox for pod can be found. 
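
[annotation] The W-level "Failed to process watch event ... 404" entries (here for crio-4ee9166b..., earlier for crio-cf2a6323...) appear to record a startup race: cAdvisor sees the new cgroup before the runtime can report the brand-new container. In this log every such ID shows up moments later in a normal "ContainerStarted" PLEG event, which is what marks the warning as benign. A sketch that flags only the IDs which never start (orphaned_watch_errors is a hypothetical helper):

    import re

    WATCH_404 = re.compile(r"Error finding container ([0-9a-f]{64})")
    STARTED = re.compile(r'"ContainerStarted","Data":"([0-9a-f]{64})"')

    def orphaned_watch_errors(text: str) -> set:
        """IDs from 404 watch warnings that never report ContainerStarted."""
        return set(WATCH_404.findall(text)) - set(STARTED.findall(text))

    # An empty set means every watch warning was a transient startup race.
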
Need to start a new one" pod="openstack/placement-db-create-rdxgm" Oct 13 08:05:43 crc kubenswrapper[4833]: I1013 08:05:43.022524 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rdxgm"] Oct 13 08:05:43 crc kubenswrapper[4833]: I1013 08:05:43.169373 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phwk\" (UniqueName: \"kubernetes.io/projected/e8a0bb42-7255-43d8-8cc6-3a3696c94cde-kube-api-access-6phwk\") pod \"placement-db-create-rdxgm\" (UID: \"e8a0bb42-7255-43d8-8cc6-3a3696c94cde\") " pod="openstack/placement-db-create-rdxgm" Oct 13 08:05:43 crc kubenswrapper[4833]: I1013 08:05:43.270990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6phwk\" (UniqueName: \"kubernetes.io/projected/e8a0bb42-7255-43d8-8cc6-3a3696c94cde-kube-api-access-6phwk\") pod \"placement-db-create-rdxgm\" (UID: \"e8a0bb42-7255-43d8-8cc6-3a3696c94cde\") " pod="openstack/placement-db-create-rdxgm" Oct 13 08:05:43 crc kubenswrapper[4833]: I1013 08:05:43.299332 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6phwk\" (UniqueName: \"kubernetes.io/projected/e8a0bb42-7255-43d8-8cc6-3a3696c94cde-kube-api-access-6phwk\") pod \"placement-db-create-rdxgm\" (UID: \"e8a0bb42-7255-43d8-8cc6-3a3696c94cde\") " pod="openstack/placement-db-create-rdxgm" Oct 13 08:05:43 crc kubenswrapper[4833]: I1013 08:05:43.338681 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rdxgm" Oct 13 08:05:43 crc kubenswrapper[4833]: W1013 08:05:43.822308 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8a0bb42_7255_43d8_8cc6_3a3696c94cde.slice/crio-b653938e40ca6c83faa8a6a4d5215419e9f1c5a7ec5ac81a2086f46a162adcde WatchSource:0}: Error finding container b653938e40ca6c83faa8a6a4d5215419e9f1c5a7ec5ac81a2086f46a162adcde: Status 404 returned error can't find the container with id b653938e40ca6c83faa8a6a4d5215419e9f1c5a7ec5ac81a2086f46a162adcde Oct 13 08:05:43 crc kubenswrapper[4833]: I1013 08:05:43.823218 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rdxgm"] Oct 13 08:05:44 crc kubenswrapper[4833]: I1013 08:05:44.112175 4833 generic.go:334] "Generic (PLEG): container finished" podID="e8a0bb42-7255-43d8-8cc6-3a3696c94cde" containerID="cf36ebb34bf44f17c44b05dc7c5dee6881a8de9e3b8a77267d14493792d17a58" exitCode=0 Oct 13 08:05:44 crc kubenswrapper[4833]: I1013 08:05:44.112228 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rdxgm" event={"ID":"e8a0bb42-7255-43d8-8cc6-3a3696c94cde","Type":"ContainerDied","Data":"cf36ebb34bf44f17c44b05dc7c5dee6881a8de9e3b8a77267d14493792d17a58"} Oct 13 08:05:44 crc kubenswrapper[4833]: I1013 08:05:44.112265 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rdxgm" event={"ID":"e8a0bb42-7255-43d8-8cc6-3a3696c94cde","Type":"ContainerStarted","Data":"b653938e40ca6c83faa8a6a4d5215419e9f1c5a7ec5ac81a2086f46a162adcde"} Oct 13 08:05:45 crc kubenswrapper[4833]: I1013 08:05:45.508283 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rdxgm" Oct 13 08:05:45 crc kubenswrapper[4833]: I1013 08:05:45.613961 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6phwk\" (UniqueName: \"kubernetes.io/projected/e8a0bb42-7255-43d8-8cc6-3a3696c94cde-kube-api-access-6phwk\") pod \"e8a0bb42-7255-43d8-8cc6-3a3696c94cde\" (UID: \"e8a0bb42-7255-43d8-8cc6-3a3696c94cde\") " Oct 13 08:05:45 crc kubenswrapper[4833]: I1013 08:05:45.619157 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a0bb42-7255-43d8-8cc6-3a3696c94cde-kube-api-access-6phwk" (OuterVolumeSpecName: "kube-api-access-6phwk") pod "e8a0bb42-7255-43d8-8cc6-3a3696c94cde" (UID: "e8a0bb42-7255-43d8-8cc6-3a3696c94cde"). InnerVolumeSpecName "kube-api-access-6phwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:05:45 crc kubenswrapper[4833]: I1013 08:05:45.716897 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6phwk\" (UniqueName: \"kubernetes.io/projected/e8a0bb42-7255-43d8-8cc6-3a3696c94cde-kube-api-access-6phwk\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:46 crc kubenswrapper[4833]: I1013 08:05:46.135922 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rdxgm" event={"ID":"e8a0bb42-7255-43d8-8cc6-3a3696c94cde","Type":"ContainerDied","Data":"b653938e40ca6c83faa8a6a4d5215419e9f1c5a7ec5ac81a2086f46a162adcde"} Oct 13 08:05:46 crc kubenswrapper[4833]: I1013 08:05:46.135987 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rdxgm" Oct 13 08:05:46 crc kubenswrapper[4833]: I1013 08:05:46.135993 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b653938e40ca6c83faa8a6a4d5215419e9f1c5a7ec5ac81a2086f46a162adcde" Oct 13 08:05:47 crc kubenswrapper[4833]: I1013 08:05:47.149645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xx2gr" event={"ID":"ee9fbe32-8932-46db-b1b2-07138c9dae9f","Type":"ContainerStarted","Data":"de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4"} Oct 13 08:05:47 crc kubenswrapper[4833]: I1013 08:05:47.183053 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xx2gr" podStartSLOduration=2.724617907 podStartE2EDuration="9.183019219s" podCreationTimestamp="2025-10-13 08:05:38 +0000 UTC" firstStartedPulling="2025-10-13 08:05:40.068466197 +0000 UTC m=+5830.168889113" lastFinishedPulling="2025-10-13 08:05:46.526867509 +0000 UTC m=+5836.627290425" observedRunningTime="2025-10-13 08:05:47.174495117 +0000 UTC m=+5837.274918053" watchObservedRunningTime="2025-10-13 08:05:47.183019219 +0000 UTC m=+5837.283442145" Oct 13 08:05:49 crc kubenswrapper[4833]: I1013 08:05:49.263876 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:49 crc kubenswrapper[4833]: I1013 08:05:49.264330 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:49 crc kubenswrapper[4833]: I1013 08:05:49.324366 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:51 crc kubenswrapper[4833]: I1013 08:05:51.704331 4833 scope.go:117] "RemoveContainer" 
containerID="c23cac5248e069421afeb41b6309979340b828b4aef28bf4635b8bc20f906f37" Oct 13 08:05:52 crc kubenswrapper[4833]: I1013 08:05:52.628871 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:05:52 crc kubenswrapper[4833]: E1013 08:05:52.630871 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.156976 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6dca-account-create-9jxnp"] Oct 13 08:05:53 crc kubenswrapper[4833]: E1013 08:05:53.157449 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a0bb42-7255-43d8-8cc6-3a3696c94cde" containerName="mariadb-database-create" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.157467 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a0bb42-7255-43d8-8cc6-3a3696c94cde" containerName="mariadb-database-create" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.157720 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a0bb42-7255-43d8-8cc6-3a3696c94cde" containerName="mariadb-database-create" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.158519 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6dca-account-create-9jxnp" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.161297 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.187730 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6dca-account-create-9jxnp"] Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.304940 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzkk7\" (UniqueName: \"kubernetes.io/projected/909cab0a-686f-44ed-a0e9-f17d241a151a-kube-api-access-zzkk7\") pod \"placement-6dca-account-create-9jxnp\" (UID: \"909cab0a-686f-44ed-a0e9-f17d241a151a\") " pod="openstack/placement-6dca-account-create-9jxnp" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.408582 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzkk7\" (UniqueName: \"kubernetes.io/projected/909cab0a-686f-44ed-a0e9-f17d241a151a-kube-api-access-zzkk7\") pod \"placement-6dca-account-create-9jxnp\" (UID: \"909cab0a-686f-44ed-a0e9-f17d241a151a\") " pod="openstack/placement-6dca-account-create-9jxnp" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.436409 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzkk7\" (UniqueName: \"kubernetes.io/projected/909cab0a-686f-44ed-a0e9-f17d241a151a-kube-api-access-zzkk7\") pod \"placement-6dca-account-create-9jxnp\" (UID: \"909cab0a-686f-44ed-a0e9-f17d241a151a\") " pod="openstack/placement-6dca-account-create-9jxnp" Oct 13 08:05:53 crc kubenswrapper[4833]: I1013 08:05:53.481409 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6dca-account-create-9jxnp" Oct 13 08:05:54 crc kubenswrapper[4833]: I1013 08:05:54.068021 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6dca-account-create-9jxnp"] Oct 13 08:05:54 crc kubenswrapper[4833]: I1013 08:05:54.241188 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dca-account-create-9jxnp" event={"ID":"909cab0a-686f-44ed-a0e9-f17d241a151a","Type":"ContainerStarted","Data":"8eaacdaef2be75854b744b217ad8cfa636381b8769f241f17a8c9509aef15b5e"} Oct 13 08:05:55 crc kubenswrapper[4833]: I1013 08:05:55.257701 4833 generic.go:334] "Generic (PLEG): container finished" podID="909cab0a-686f-44ed-a0e9-f17d241a151a" containerID="bcb94f369faa5e18cbba0fb7155cbb65a93627e835a22c44cb646b5830dad5b4" exitCode=0 Oct 13 08:05:55 crc kubenswrapper[4833]: I1013 08:05:55.257782 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dca-account-create-9jxnp" event={"ID":"909cab0a-686f-44ed-a0e9-f17d241a151a","Type":"ContainerDied","Data":"bcb94f369faa5e18cbba0fb7155cbb65a93627e835a22c44cb646b5830dad5b4"} Oct 13 08:05:56 crc kubenswrapper[4833]: I1013 08:05:56.684729 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6dca-account-create-9jxnp" Oct 13 08:05:56 crc kubenswrapper[4833]: I1013 08:05:56.788697 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzkk7\" (UniqueName: \"kubernetes.io/projected/909cab0a-686f-44ed-a0e9-f17d241a151a-kube-api-access-zzkk7\") pod \"909cab0a-686f-44ed-a0e9-f17d241a151a\" (UID: \"909cab0a-686f-44ed-a0e9-f17d241a151a\") " Oct 13 08:05:56 crc kubenswrapper[4833]: I1013 08:05:56.799082 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909cab0a-686f-44ed-a0e9-f17d241a151a-kube-api-access-zzkk7" (OuterVolumeSpecName: "kube-api-access-zzkk7") pod "909cab0a-686f-44ed-a0e9-f17d241a151a" (UID: "909cab0a-686f-44ed-a0e9-f17d241a151a"). InnerVolumeSpecName "kube-api-access-zzkk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:05:56 crc kubenswrapper[4833]: I1013 08:05:56.892333 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzkk7\" (UniqueName: \"kubernetes.io/projected/909cab0a-686f-44ed-a0e9-f17d241a151a-kube-api-access-zzkk7\") on node \"crc\" DevicePath \"\"" Oct 13 08:05:57 crc kubenswrapper[4833]: I1013 08:05:57.283729 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dca-account-create-9jxnp" event={"ID":"909cab0a-686f-44ed-a0e9-f17d241a151a","Type":"ContainerDied","Data":"8eaacdaef2be75854b744b217ad8cfa636381b8769f241f17a8c9509aef15b5e"} Oct 13 08:05:57 crc kubenswrapper[4833]: I1013 08:05:57.283782 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6dca-account-create-9jxnp" Oct 13 08:05:57 crc kubenswrapper[4833]: I1013 08:05:57.283794 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eaacdaef2be75854b744b217ad8cfa636381b8769f241f17a8c9509aef15b5e" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.403359 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c69c676c-gns7n"] Oct 13 08:05:58 crc kubenswrapper[4833]: E1013 08:05:58.404009 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909cab0a-686f-44ed-a0e9-f17d241a151a" containerName="mariadb-account-create" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.404021 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="909cab0a-686f-44ed-a0e9-f17d241a151a" containerName="mariadb-account-create" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.404194 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="909cab0a-686f-44ed-a0e9-f17d241a151a" containerName="mariadb-account-create" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.407646 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.424813 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c69c676c-gns7n"] Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.434186 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-k7s9w"] Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.435356 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.437623 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.437786 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.437974 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bjlkk" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.439447 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-k7s9w"] Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529234 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-dns-svc\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529303 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b65tz\" (UniqueName: \"kubernetes.io/projected/953364b7-0926-40ea-a171-667d73c6af22-kube-api-access-b65tz\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529343 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqxzj\" (UniqueName: \"kubernetes.io/projected/6b134464-aede-4c01-b02c-97b08f757af5-kube-api-access-cqxzj\") pod 
\"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529368 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-config\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529389 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b134464-aede-4c01-b02c-97b08f757af5-logs\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529409 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-sb\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529444 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-scripts\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529457 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-nb\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529477 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-config-data\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.529503 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-combined-ca-bundle\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.630989 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-scripts\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.631252 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-nb\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: 
\"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.631342 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-config-data\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.631429 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-combined-ca-bundle\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.631549 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-dns-svc\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.631724 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b65tz\" (UniqueName: \"kubernetes.io/projected/953364b7-0926-40ea-a171-667d73c6af22-kube-api-access-b65tz\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.632318 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqxzj\" (UniqueName: \"kubernetes.io/projected/6b134464-aede-4c01-b02c-97b08f757af5-kube-api-access-cqxzj\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.632490 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-config\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.632700 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b134464-aede-4c01-b02c-97b08f757af5-logs\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.632806 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-sb\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.633998 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-sb\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc 
kubenswrapper[4833]: I1013 08:05:58.634000 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b134464-aede-4c01-b02c-97b08f757af5-logs\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.634275 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-nb\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.635249 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-dns-svc\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.636166 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-config\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.638052 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-config-data\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.639019 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-combined-ca-bundle\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.640974 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-scripts\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.652312 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqxzj\" (UniqueName: \"kubernetes.io/projected/6b134464-aede-4c01-b02c-97b08f757af5-kube-api-access-cqxzj\") pod \"placement-db-sync-k7s9w\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.654448 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b65tz\" (UniqueName: \"kubernetes.io/projected/953364b7-0926-40ea-a171-667d73c6af22-kube-api-access-b65tz\") pod \"dnsmasq-dns-76c69c676c-gns7n\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.775624 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:05:58 crc kubenswrapper[4833]: I1013 08:05:58.791909 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k7s9w" Oct 13 08:05:59 crc kubenswrapper[4833]: I1013 08:05:59.245528 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c69c676c-gns7n"] Oct 13 08:05:59 crc kubenswrapper[4833]: I1013 08:05:59.314178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" event={"ID":"953364b7-0926-40ea-a171-667d73c6af22","Type":"ContainerStarted","Data":"136b55b285e867424cff27c11ab026238466b8bda2a2334ee6af7cc80e9022a7"} Oct 13 08:05:59 crc kubenswrapper[4833]: I1013 08:05:59.318720 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-k7s9w"] Oct 13 08:05:59 crc kubenswrapper[4833]: I1013 08:05:59.330083 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:05:59 crc kubenswrapper[4833]: W1013 08:05:59.338908 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b134464_aede_4c01_b02c_97b08f757af5.slice/crio-569ab6f51e6ccb2f3406992867312c99b607ad9414f37ac80731d4272d393d0a WatchSource:0}: Error finding container 569ab6f51e6ccb2f3406992867312c99b607ad9414f37ac80731d4272d393d0a: Status 404 returned error can't find the container with id 569ab6f51e6ccb2f3406992867312c99b607ad9414f37ac80731d4272d393d0a Oct 13 08:05:59 crc kubenswrapper[4833]: I1013 08:05:59.386335 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xx2gr"] Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.327301 4833 generic.go:334] "Generic (PLEG): container finished" podID="953364b7-0926-40ea-a171-667d73c6af22" containerID="ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d" exitCode=0 Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.327390 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" event={"ID":"953364b7-0926-40ea-a171-667d73c6af22","Type":"ContainerDied","Data":"ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d"} Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.338171 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k7s9w" event={"ID":"6b134464-aede-4c01-b02c-97b08f757af5","Type":"ContainerStarted","Data":"039b522d1b46fc4a33e80b902619cb87f41f131570c9fbf285583446628721ed"} Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.338254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k7s9w" event={"ID":"6b134464-aede-4c01-b02c-97b08f757af5","Type":"ContainerStarted","Data":"569ab6f51e6ccb2f3406992867312c99b607ad9414f37ac80731d4272d393d0a"} Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.338306 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xx2gr" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerName="registry-server" containerID="cri-o://de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4" gracePeriod=2 Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.393249 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-k7s9w" 
podStartSLOduration=2.393228935 podStartE2EDuration="2.393228935s" podCreationTimestamp="2025-10-13 08:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:06:00.388519121 +0000 UTC m=+5850.488942037" watchObservedRunningTime="2025-10-13 08:06:00.393228935 +0000 UTC m=+5850.493651861" Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.781263 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.899323 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-catalog-content\") pod \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.899972 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzk5p\" (UniqueName: \"kubernetes.io/projected/ee9fbe32-8932-46db-b1b2-07138c9dae9f-kube-api-access-fzk5p\") pod \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.900510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-utilities\") pod \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\" (UID: \"ee9fbe32-8932-46db-b1b2-07138c9dae9f\") " Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.901415 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-utilities" (OuterVolumeSpecName: "utilities") pod "ee9fbe32-8932-46db-b1b2-07138c9dae9f" (UID: "ee9fbe32-8932-46db-b1b2-07138c9dae9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.901854 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.913816 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9fbe32-8932-46db-b1b2-07138c9dae9f-kube-api-access-fzk5p" (OuterVolumeSpecName: "kube-api-access-fzk5p") pod "ee9fbe32-8932-46db-b1b2-07138c9dae9f" (UID: "ee9fbe32-8932-46db-b1b2-07138c9dae9f"). InnerVolumeSpecName "kube-api-access-fzk5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:06:00 crc kubenswrapper[4833]: I1013 08:06:00.919810 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee9fbe32-8932-46db-b1b2-07138c9dae9f" (UID: "ee9fbe32-8932-46db-b1b2-07138c9dae9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.003359 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9fbe32-8932-46db-b1b2-07138c9dae9f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.003404 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzk5p\" (UniqueName: \"kubernetes.io/projected/ee9fbe32-8932-46db-b1b2-07138c9dae9f-kube-api-access-fzk5p\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.352025 4833 generic.go:334] "Generic (PLEG): container finished" podID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerID="de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4" exitCode=0 Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.352089 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xx2gr" event={"ID":"ee9fbe32-8932-46db-b1b2-07138c9dae9f","Type":"ContainerDied","Data":"de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4"} Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.352152 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xx2gr" event={"ID":"ee9fbe32-8932-46db-b1b2-07138c9dae9f","Type":"ContainerDied","Data":"4ee9166bb046e11369ef782f53bb5196f5cca66755cc4ba8f3be8ed403085af9"} Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.352186 4833 scope.go:117] "RemoveContainer" containerID="de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.353880 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xx2gr" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.355477 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" event={"ID":"953364b7-0926-40ea-a171-667d73c6af22","Type":"ContainerStarted","Data":"1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6"} Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.355794 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.358138 4833 generic.go:334] "Generic (PLEG): container finished" podID="6b134464-aede-4c01-b02c-97b08f757af5" containerID="039b522d1b46fc4a33e80b902619cb87f41f131570c9fbf285583446628721ed" exitCode=0 Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.358239 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k7s9w" event={"ID":"6b134464-aede-4c01-b02c-97b08f757af5","Type":"ContainerDied","Data":"039b522d1b46fc4a33e80b902619cb87f41f131570c9fbf285583446628721ed"} Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.389221 4833 scope.go:117] "RemoveContainer" containerID="db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.402709 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" podStartSLOduration=3.402690208 podStartE2EDuration="3.402690208s" podCreationTimestamp="2025-10-13 08:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:06:01.395747431 +0000 UTC m=+5851.496170347" watchObservedRunningTime="2025-10-13 08:06:01.402690208 +0000 UTC m=+5851.503113114" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.433355 4833 scope.go:117] "RemoveContainer" containerID="5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.440943 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xx2gr"] Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.450487 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xx2gr"] Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.485567 4833 scope.go:117] "RemoveContainer" containerID="de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4" Oct 13 08:06:01 crc kubenswrapper[4833]: E1013 08:06:01.486140 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4\": container with ID starting with de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4 not found: ID does not exist" containerID="de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.486177 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4"} err="failed to get container status \"de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4\": rpc error: code = NotFound desc = could not find container \"de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4\": container with ID starting with 
de36c71cbcc9982b06c73f121b37d8f507898213bcfe309e7351f15fcd8349f4 not found: ID does not exist" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.486203 4833 scope.go:117] "RemoveContainer" containerID="db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998" Oct 13 08:06:01 crc kubenswrapper[4833]: E1013 08:06:01.486734 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998\": container with ID starting with db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998 not found: ID does not exist" containerID="db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.486758 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998"} err="failed to get container status \"db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998\": rpc error: code = NotFound desc = could not find container \"db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998\": container with ID starting with db17adf69ab4f6e9fda68a77b1cd25908e500cc13e6ef5bbeee8c780d63f0998 not found: ID does not exist" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.486773 4833 scope.go:117] "RemoveContainer" containerID="5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b" Oct 13 08:06:01 crc kubenswrapper[4833]: E1013 08:06:01.487108 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b\": container with ID starting with 5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b not found: ID does not exist" containerID="5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b" Oct 13 08:06:01 crc kubenswrapper[4833]: I1013 08:06:01.487140 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b"} err="failed to get container status \"5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b\": rpc error: code = NotFound desc = could not find container \"5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b\": container with ID starting with 5661a78e855556af7df72699e0046b6416bdcf530d6bf545e82e6776e193272b not found: ID does not exist" Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.642256 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" path="/var/lib/kubelet/pods/ee9fbe32-8932-46db-b1b2-07138c9dae9f/volumes" Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.817050 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-k7s9w" Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.842214 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqxzj\" (UniqueName: \"kubernetes.io/projected/6b134464-aede-4c01-b02c-97b08f757af5-kube-api-access-cqxzj\") pod \"6b134464-aede-4c01-b02c-97b08f757af5\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.850948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b134464-aede-4c01-b02c-97b08f757af5-kube-api-access-cqxzj" (OuterVolumeSpecName: "kube-api-access-cqxzj") pod "6b134464-aede-4c01-b02c-97b08f757af5" (UID: "6b134464-aede-4c01-b02c-97b08f757af5"). InnerVolumeSpecName "kube-api-access-cqxzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.943594 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-config-data\") pod \"6b134464-aede-4c01-b02c-97b08f757af5\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.943726 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-scripts\") pod \"6b134464-aede-4c01-b02c-97b08f757af5\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.944430 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b134464-aede-4c01-b02c-97b08f757af5-logs\") pod \"6b134464-aede-4c01-b02c-97b08f757af5\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.944472 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-combined-ca-bundle\") pod \"6b134464-aede-4c01-b02c-97b08f757af5\" (UID: \"6b134464-aede-4c01-b02c-97b08f757af5\") " Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.944962 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b134464-aede-4c01-b02c-97b08f757af5-logs" (OuterVolumeSpecName: "logs") pod "6b134464-aede-4c01-b02c-97b08f757af5" (UID: "6b134464-aede-4c01-b02c-97b08f757af5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.945064 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqxzj\" (UniqueName: \"kubernetes.io/projected/6b134464-aede-4c01-b02c-97b08f757af5-kube-api-access-cqxzj\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.948197 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-scripts" (OuterVolumeSpecName: "scripts") pod "6b134464-aede-4c01-b02c-97b08f757af5" (UID: "6b134464-aede-4c01-b02c-97b08f757af5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.972703 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b134464-aede-4c01-b02c-97b08f757af5" (UID: "6b134464-aede-4c01-b02c-97b08f757af5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:06:02 crc kubenswrapper[4833]: I1013 08:06:02.991385 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-config-data" (OuterVolumeSpecName: "config-data") pod "6b134464-aede-4c01-b02c-97b08f757af5" (UID: "6b134464-aede-4c01-b02c-97b08f757af5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.046198 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.046234 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.046246 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b134464-aede-4c01-b02c-97b08f757af5-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.046258 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b134464-aede-4c01-b02c-97b08f757af5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.390885 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k7s9w" event={"ID":"6b134464-aede-4c01-b02c-97b08f757af5","Type":"ContainerDied","Data":"569ab6f51e6ccb2f3406992867312c99b607ad9414f37ac80731d4272d393d0a"} Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.390933 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="569ab6f51e6ccb2f3406992867312c99b607ad9414f37ac80731d4272d393d0a" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.390964 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-k7s9w" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.577345 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-864fb88886-jf42k"] Oct 13 08:06:03 crc kubenswrapper[4833]: E1013 08:06:03.578314 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerName="extract-utilities" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.578343 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerName="extract-utilities" Oct 13 08:06:03 crc kubenswrapper[4833]: E1013 08:06:03.578360 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerName="registry-server" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.578368 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerName="registry-server" Oct 13 08:06:03 crc kubenswrapper[4833]: E1013 08:06:03.578397 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b134464-aede-4c01-b02c-97b08f757af5" containerName="placement-db-sync" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.578405 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b134464-aede-4c01-b02c-97b08f757af5" containerName="placement-db-sync" Oct 13 08:06:03 crc kubenswrapper[4833]: E1013 08:06:03.578428 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerName="extract-content" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.578433 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerName="extract-content" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.578662 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b134464-aede-4c01-b02c-97b08f757af5" containerName="placement-db-sync" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.578684 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9fbe32-8932-46db-b1b2-07138c9dae9f" containerName="registry-server" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.579892 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.582709 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.582746 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bjlkk" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.583616 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.584859 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.586120 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.597521 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-864fb88886-jf42k"] Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.762891 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-combined-ca-bundle\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.763038 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-scripts\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.763073 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-public-tls-certs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.763119 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-internal-tls-certs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.763207 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009992ed-3b8d-457e-a32c-3119e80a90a7-logs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.763294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-config-data\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.763355 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqm5x\" (UniqueName: \"kubernetes.io/projected/009992ed-3b8d-457e-a32c-3119e80a90a7-kube-api-access-wqm5x\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.864939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-scripts\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.864990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-public-tls-certs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.865031 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-internal-tls-certs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.865049 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009992ed-3b8d-457e-a32c-3119e80a90a7-logs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.865075 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-config-data\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.865107 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqm5x\" (UniqueName: \"kubernetes.io/projected/009992ed-3b8d-457e-a32c-3119e80a90a7-kube-api-access-wqm5x\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.865176 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-combined-ca-bundle\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.865860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009992ed-3b8d-457e-a32c-3119e80a90a7-logs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.869945 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-config-data\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.870994 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-public-tls-certs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.871470 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-scripts\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.872032 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-internal-tls-certs\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.872851 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009992ed-3b8d-457e-a32c-3119e80a90a7-combined-ca-bundle\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.886111 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqm5x\" (UniqueName: \"kubernetes.io/projected/009992ed-3b8d-457e-a32c-3119e80a90a7-kube-api-access-wqm5x\") pod \"placement-864fb88886-jf42k\" (UID: \"009992ed-3b8d-457e-a32c-3119e80a90a7\") " pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:03 crc kubenswrapper[4833]: I1013 08:06:03.897263 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:04 crc kubenswrapper[4833]: I1013 08:06:04.328425 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-864fb88886-jf42k"] Oct 13 08:06:04 crc kubenswrapper[4833]: I1013 08:06:04.417054 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-864fb88886-jf42k" event={"ID":"009992ed-3b8d-457e-a32c-3119e80a90a7","Type":"ContainerStarted","Data":"3ca33ce072b046774254adbd53c4ca0d02ec2ada561ae5710a13c7ade178e531"} Oct 13 08:06:05 crc kubenswrapper[4833]: I1013 08:06:05.432105 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-864fb88886-jf42k" event={"ID":"009992ed-3b8d-457e-a32c-3119e80a90a7","Type":"ContainerStarted","Data":"ed291927d20c0b018e52046e621caae650ef82fc8a0c8c9349c56bc7b418b8c8"} Oct 13 08:06:05 crc kubenswrapper[4833]: I1013 08:06:05.432662 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-864fb88886-jf42k" event={"ID":"009992ed-3b8d-457e-a32c-3119e80a90a7","Type":"ContainerStarted","Data":"e6ff1a9a4e5d822b1290c6fb95540f70dab96948c3753a6b287c64f48b476dec"} Oct 13 08:06:05 crc kubenswrapper[4833]: I1013 08:06:05.432695 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:05 crc kubenswrapper[4833]: I1013 08:06:05.432717 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:05 crc kubenswrapper[4833]: I1013 08:06:05.481149 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-864fb88886-jf42k" podStartSLOduration=2.481122633 podStartE2EDuration="2.481122633s" podCreationTimestamp="2025-10-13 08:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:06:05.468253137 +0000 UTC m=+5855.568676083" watchObservedRunningTime="2025-10-13 08:06:05.481122633 +0000 UTC m=+5855.581545589" Oct 13 08:06:05 crc kubenswrapper[4833]: I1013 08:06:05.628990 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:06:05 crc kubenswrapper[4833]: E1013 08:06:05.630403 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:06:08 crc kubenswrapper[4833]: I1013 08:06:08.778196 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:06:08 crc kubenswrapper[4833]: I1013 08:06:08.899083 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8cc95f99-9kjsw"] Oct 13 08:06:08 crc kubenswrapper[4833]: I1013 08:06:08.899296 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" podUID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" containerName="dnsmasq-dns" containerID="cri-o://19b68b0ad8d8d1e30bc8f863279feec7caba2ca5a8b65d940f0eb34760b4db34" gracePeriod=10 Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.478358 4833 generic.go:334] 
"Generic (PLEG): container finished" podID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" containerID="19b68b0ad8d8d1e30bc8f863279feec7caba2ca5a8b65d940f0eb34760b4db34" exitCode=0 Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.478509 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" event={"ID":"a33e9d7e-8788-45d9-9a9c-74dff59236f7","Type":"ContainerDied","Data":"19b68b0ad8d8d1e30bc8f863279feec7caba2ca5a8b65d940f0eb34760b4db34"} Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.507192 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" podUID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.65:5353: connect: connection refused" Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.882843 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.989717 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-sb\") pod \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.990729 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-config\") pod \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.990765 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-nb\") pod \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.990870 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-dns-svc\") pod \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " Oct 13 08:06:09 crc kubenswrapper[4833]: I1013 08:06:09.990939 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4zxv\" (UniqueName: \"kubernetes.io/projected/a33e9d7e-8788-45d9-9a9c-74dff59236f7-kube-api-access-f4zxv\") pod \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\" (UID: \"a33e9d7e-8788-45d9-9a9c-74dff59236f7\") " Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.001156 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a33e9d7e-8788-45d9-9a9c-74dff59236f7-kube-api-access-f4zxv" (OuterVolumeSpecName: "kube-api-access-f4zxv") pod "a33e9d7e-8788-45d9-9a9c-74dff59236f7" (UID: "a33e9d7e-8788-45d9-9a9c-74dff59236f7"). InnerVolumeSpecName "kube-api-access-f4zxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.043515 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-config" (OuterVolumeSpecName: "config") pod "a33e9d7e-8788-45d9-9a9c-74dff59236f7" (UID: "a33e9d7e-8788-45d9-9a9c-74dff59236f7"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.055956 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a33e9d7e-8788-45d9-9a9c-74dff59236f7" (UID: "a33e9d7e-8788-45d9-9a9c-74dff59236f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.063415 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a33e9d7e-8788-45d9-9a9c-74dff59236f7" (UID: "a33e9d7e-8788-45d9-9a9c-74dff59236f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.069968 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a33e9d7e-8788-45d9-9a9c-74dff59236f7" (UID: "a33e9d7e-8788-45d9-9a9c-74dff59236f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.092405 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.092433 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.092442 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.092451 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4zxv\" (UniqueName: \"kubernetes.io/projected/a33e9d7e-8788-45d9-9a9c-74dff59236f7-kube-api-access-f4zxv\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.092459 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a33e9d7e-8788-45d9-9a9c-74dff59236f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.490988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" event={"ID":"a33e9d7e-8788-45d9-9a9c-74dff59236f7","Type":"ContainerDied","Data":"23185dbd40cdd3cb99fc1657970dc10514e791bca9143eb72ce1b8a05e27dbba"} Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.491063 4833 scope.go:117] "RemoveContainer" containerID="19b68b0ad8d8d1e30bc8f863279feec7caba2ca5a8b65d940f0eb34760b4db34" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.491243 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8cc95f99-9kjsw" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.534852 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8cc95f99-9kjsw"] Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.535708 4833 scope.go:117] "RemoveContainer" containerID="fafd8a8a1de2dcdcbbf64fe36c7d7062471517266af53ccc7fa5b8532a27b2ed" Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.541616 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b8cc95f99-9kjsw"] Oct 13 08:06:10 crc kubenswrapper[4833]: I1013 08:06:10.655118 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" path="/var/lib/kubelet/pods/a33e9d7e-8788-45d9-9a9c-74dff59236f7/volumes" Oct 13 08:06:18 crc kubenswrapper[4833]: I1013 08:06:18.627942 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:06:18 crc kubenswrapper[4833]: E1013 08:06:18.628793 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:06:33 crc kubenswrapper[4833]: I1013 08:06:33.627038 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:06:33 crc kubenswrapper[4833]: E1013 08:06:33.627982 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:06:34 crc kubenswrapper[4833]: I1013 08:06:34.968748 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:35 crc kubenswrapper[4833]: I1013 08:06:35.932671 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-864fb88886-jf42k" Oct 13 08:06:45 crc kubenswrapper[4833]: I1013 08:06:45.628045 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:06:45 crc kubenswrapper[4833]: E1013 08:06:45.629048 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.530303 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dmzfp"] Oct 13 08:06:56 crc kubenswrapper[4833]: E1013 08:06:56.535961 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" 
containerName="dnsmasq-dns" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.535982 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" containerName="dnsmasq-dns" Oct 13 08:06:56 crc kubenswrapper[4833]: E1013 08:06:56.535995 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" containerName="init" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.536001 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" containerName="init" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.536191 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a33e9d7e-8788-45d9-9a9c-74dff59236f7" containerName="dnsmasq-dns" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.536817 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmzfp" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.540417 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dmzfp"] Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.584785 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnrqc\" (UniqueName: \"kubernetes.io/projected/e82baa4c-bdbb-4c92-b1eb-2a47303b5e70-kube-api-access-bnrqc\") pod \"nova-api-db-create-dmzfp\" (UID: \"e82baa4c-bdbb-4c92-b1eb-2a47303b5e70\") " pod="openstack/nova-api-db-create-dmzfp" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.620471 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-w2h2m"] Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.622277 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w2h2m" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.640232 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w2h2m"] Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.686413 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbqw\" (UniqueName: \"kubernetes.io/projected/1573d87d-2039-4e07-8b10-40e82b030687-kube-api-access-crbqw\") pod \"nova-cell0-db-create-w2h2m\" (UID: \"1573d87d-2039-4e07-8b10-40e82b030687\") " pod="openstack/nova-cell0-db-create-w2h2m" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.687473 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnrqc\" (UniqueName: \"kubernetes.io/projected/e82baa4c-bdbb-4c92-b1eb-2a47303b5e70-kube-api-access-bnrqc\") pod \"nova-api-db-create-dmzfp\" (UID: \"e82baa4c-bdbb-4c92-b1eb-2a47303b5e70\") " pod="openstack/nova-api-db-create-dmzfp" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.714170 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnrqc\" (UniqueName: \"kubernetes.io/projected/e82baa4c-bdbb-4c92-b1eb-2a47303b5e70-kube-api-access-bnrqc\") pod \"nova-api-db-create-dmzfp\" (UID: \"e82baa4c-bdbb-4c92-b1eb-2a47303b5e70\") " pod="openstack/nova-api-db-create-dmzfp" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.721738 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vngxq"] Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.723266 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vngxq" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.731692 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vngxq"] Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.792201 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s58dw\" (UniqueName: \"kubernetes.io/projected/3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c-kube-api-access-s58dw\") pod \"nova-cell1-db-create-vngxq\" (UID: \"3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c\") " pod="openstack/nova-cell1-db-create-vngxq" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.792999 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbqw\" (UniqueName: \"kubernetes.io/projected/1573d87d-2039-4e07-8b10-40e82b030687-kube-api-access-crbqw\") pod \"nova-cell0-db-create-w2h2m\" (UID: \"1573d87d-2039-4e07-8b10-40e82b030687\") " pod="openstack/nova-cell0-db-create-w2h2m" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.817972 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbqw\" (UniqueName: \"kubernetes.io/projected/1573d87d-2039-4e07-8b10-40e82b030687-kube-api-access-crbqw\") pod \"nova-cell0-db-create-w2h2m\" (UID: \"1573d87d-2039-4e07-8b10-40e82b030687\") " pod="openstack/nova-cell0-db-create-w2h2m" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.860918 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmzfp" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.894690 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s58dw\" (UniqueName: \"kubernetes.io/projected/3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c-kube-api-access-s58dw\") pod \"nova-cell1-db-create-vngxq\" (UID: \"3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c\") " pod="openstack/nova-cell1-db-create-vngxq" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.915145 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s58dw\" (UniqueName: \"kubernetes.io/projected/3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c-kube-api-access-s58dw\") pod \"nova-cell1-db-create-vngxq\" (UID: \"3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c\") " pod="openstack/nova-cell1-db-create-vngxq" Oct 13 08:06:56 crc kubenswrapper[4833]: I1013 08:06:56.942759 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w2h2m" Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.092256 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vngxq" Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.267829 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w2h2m"] Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.410907 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dmzfp"] Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.553790 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vngxq"] Oct 13 08:06:57 crc kubenswrapper[4833]: W1013 08:06:57.570502 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd16fb1_4ba4_40b5_8299_4ecb0a951d7c.slice/crio-3eb855d06ff8d09e65a51067914bedfb31876f5bdfe6225464e7529f41ebd38d WatchSource:0}: Error finding container 3eb855d06ff8d09e65a51067914bedfb31876f5bdfe6225464e7529f41ebd38d: Status 404 returned error can't find the container with id 3eb855d06ff8d09e65a51067914bedfb31876f5bdfe6225464e7529f41ebd38d Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.991207 4833 generic.go:334] "Generic (PLEG): container finished" podID="3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c" containerID="5ab1216531247aa7605e193a382346549c9f04a606ba2986266dc8f51ab8b208" exitCode=0 Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.991327 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vngxq" event={"ID":"3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c","Type":"ContainerDied","Data":"5ab1216531247aa7605e193a382346549c9f04a606ba2986266dc8f51ab8b208"} Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.991408 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vngxq" event={"ID":"3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c","Type":"ContainerStarted","Data":"3eb855d06ff8d09e65a51067914bedfb31876f5bdfe6225464e7529f41ebd38d"} Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.992484 4833 generic.go:334] "Generic (PLEG): container finished" podID="1573d87d-2039-4e07-8b10-40e82b030687" containerID="1e019ee8ab7636d3f4ae413125bf85692314ff29fbba2f5957ca978905a632ff" exitCode=0 Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.992578 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w2h2m" event={"ID":"1573d87d-2039-4e07-8b10-40e82b030687","Type":"ContainerDied","Data":"1e019ee8ab7636d3f4ae413125bf85692314ff29fbba2f5957ca978905a632ff"} Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.992747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w2h2m" event={"ID":"1573d87d-2039-4e07-8b10-40e82b030687","Type":"ContainerStarted","Data":"91978da2eb279b7b9991ed2102dab3acf1817eb9b53d8721d67bbc17335ad634"} Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.993874 4833 generic.go:334] "Generic (PLEG): container finished" podID="e82baa4c-bdbb-4c92-b1eb-2a47303b5e70" containerID="11063971e92da1fa7a57efc156de6920465e825efc9ea219766c066bf4888b14" exitCode=0 Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.993904 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmzfp" event={"ID":"e82baa4c-bdbb-4c92-b1eb-2a47303b5e70","Type":"ContainerDied","Data":"11063971e92da1fa7a57efc156de6920465e825efc9ea219766c066bf4888b14"} Oct 13 08:06:57 crc kubenswrapper[4833]: I1013 08:06:57.993919 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-dmzfp" event={"ID":"e82baa4c-bdbb-4c92-b1eb-2a47303b5e70","Type":"ContainerStarted","Data":"647cdec2bde310357efbdc08671afd744bd5c8df16723667784b16939a689528"} Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.478034 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmzfp" Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.484167 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w2h2m" Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.504682 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vngxq" Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.556382 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s58dw\" (UniqueName: \"kubernetes.io/projected/3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c-kube-api-access-s58dw\") pod \"3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c\" (UID: \"3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c\") " Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.556636 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crbqw\" (UniqueName: \"kubernetes.io/projected/1573d87d-2039-4e07-8b10-40e82b030687-kube-api-access-crbqw\") pod \"1573d87d-2039-4e07-8b10-40e82b030687\" (UID: \"1573d87d-2039-4e07-8b10-40e82b030687\") " Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.556666 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnrqc\" (UniqueName: \"kubernetes.io/projected/e82baa4c-bdbb-4c92-b1eb-2a47303b5e70-kube-api-access-bnrqc\") pod \"e82baa4c-bdbb-4c92-b1eb-2a47303b5e70\" (UID: \"e82baa4c-bdbb-4c92-b1eb-2a47303b5e70\") " Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.566817 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1573d87d-2039-4e07-8b10-40e82b030687-kube-api-access-crbqw" (OuterVolumeSpecName: "kube-api-access-crbqw") pod "1573d87d-2039-4e07-8b10-40e82b030687" (UID: "1573d87d-2039-4e07-8b10-40e82b030687"). InnerVolumeSpecName "kube-api-access-crbqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.567884 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e82baa4c-bdbb-4c92-b1eb-2a47303b5e70-kube-api-access-bnrqc" (OuterVolumeSpecName: "kube-api-access-bnrqc") pod "e82baa4c-bdbb-4c92-b1eb-2a47303b5e70" (UID: "e82baa4c-bdbb-4c92-b1eb-2a47303b5e70"). InnerVolumeSpecName "kube-api-access-bnrqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.567983 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c-kube-api-access-s58dw" (OuterVolumeSpecName: "kube-api-access-s58dw") pod "3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c" (UID: "3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c"). InnerVolumeSpecName "kube-api-access-s58dw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.659436 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crbqw\" (UniqueName: \"kubernetes.io/projected/1573d87d-2039-4e07-8b10-40e82b030687-kube-api-access-crbqw\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.659491 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnrqc\" (UniqueName: \"kubernetes.io/projected/e82baa4c-bdbb-4c92-b1eb-2a47303b5e70-kube-api-access-bnrqc\") on node \"crc\" DevicePath \"\"" Oct 13 08:06:59 crc kubenswrapper[4833]: I1013 08:06:59.659505 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s58dw\" (UniqueName: \"kubernetes.io/projected/3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c-kube-api-access-s58dw\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.023145 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmzfp" Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.023349 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmzfp" event={"ID":"e82baa4c-bdbb-4c92-b1eb-2a47303b5e70","Type":"ContainerDied","Data":"647cdec2bde310357efbdc08671afd744bd5c8df16723667784b16939a689528"} Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.023406 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="647cdec2bde310357efbdc08671afd744bd5c8df16723667784b16939a689528" Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.026075 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vngxq" event={"ID":"3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c","Type":"ContainerDied","Data":"3eb855d06ff8d09e65a51067914bedfb31876f5bdfe6225464e7529f41ebd38d"} Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.026122 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb855d06ff8d09e65a51067914bedfb31876f5bdfe6225464e7529f41ebd38d" Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.026095 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vngxq" Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.029309 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w2h2m" event={"ID":"1573d87d-2039-4e07-8b10-40e82b030687","Type":"ContainerDied","Data":"91978da2eb279b7b9991ed2102dab3acf1817eb9b53d8721d67bbc17335ad634"} Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.029568 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91978da2eb279b7b9991ed2102dab3acf1817eb9b53d8721d67bbc17335ad634" Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.029380 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-w2h2m" Oct 13 08:07:00 crc kubenswrapper[4833]: I1013 08:07:00.634875 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:07:00 crc kubenswrapper[4833]: E1013 08:07:00.639484 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.767932 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1d3f-account-create-7kwk2"] Oct 13 08:07:06 crc kubenswrapper[4833]: E1013 08:07:06.768935 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82baa4c-bdbb-4c92-b1eb-2a47303b5e70" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.768952 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e82baa4c-bdbb-4c92-b1eb-2a47303b5e70" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: E1013 08:07:06.768980 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.768986 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: E1013 08:07:06.769002 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1573d87d-2039-4e07-8b10-40e82b030687" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.769009 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1573d87d-2039-4e07-8b10-40e82b030687" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.769228 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.769249 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e82baa4c-bdbb-4c92-b1eb-2a47303b5e70" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.769263 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1573d87d-2039-4e07-8b10-40e82b030687" containerName="mariadb-database-create" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.769944 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1d3f-account-create-7kwk2" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.772576 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.791251 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1d3f-account-create-7kwk2"] Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.941430 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjhlm\" (UniqueName: \"kubernetes.io/projected/03f7fa85-3f1e-4c32-bea6-dff73995d9bb-kube-api-access-sjhlm\") pod \"nova-api-1d3f-account-create-7kwk2\" (UID: \"03f7fa85-3f1e-4c32-bea6-dff73995d9bb\") " pod="openstack/nova-api-1d3f-account-create-7kwk2" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.958495 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-96ac-account-create-4wt7n"] Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.959681 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-96ac-account-create-4wt7n" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.962047 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 13 08:07:06 crc kubenswrapper[4833]: I1013 08:07:06.984869 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-96ac-account-create-4wt7n"] Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.044128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbzxh\" (UniqueName: \"kubernetes.io/projected/f912654a-1ed1-419f-a02a-42dc38b92b75-kube-api-access-kbzxh\") pod \"nova-cell0-96ac-account-create-4wt7n\" (UID: \"f912654a-1ed1-419f-a02a-42dc38b92b75\") " pod="openstack/nova-cell0-96ac-account-create-4wt7n" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.044267 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjhlm\" (UniqueName: \"kubernetes.io/projected/03f7fa85-3f1e-4c32-bea6-dff73995d9bb-kube-api-access-sjhlm\") pod \"nova-api-1d3f-account-create-7kwk2\" (UID: \"03f7fa85-3f1e-4c32-bea6-dff73995d9bb\") " pod="openstack/nova-api-1d3f-account-create-7kwk2" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.075550 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjhlm\" (UniqueName: \"kubernetes.io/projected/03f7fa85-3f1e-4c32-bea6-dff73995d9bb-kube-api-access-sjhlm\") pod \"nova-api-1d3f-account-create-7kwk2\" (UID: \"03f7fa85-3f1e-4c32-bea6-dff73995d9bb\") " pod="openstack/nova-api-1d3f-account-create-7kwk2" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.095597 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1d3f-account-create-7kwk2" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.146475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbzxh\" (UniqueName: \"kubernetes.io/projected/f912654a-1ed1-419f-a02a-42dc38b92b75-kube-api-access-kbzxh\") pod \"nova-cell0-96ac-account-create-4wt7n\" (UID: \"f912654a-1ed1-419f-a02a-42dc38b92b75\") " pod="openstack/nova-cell0-96ac-account-create-4wt7n" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.157978 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-777d-account-create-vqgb4"] Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.160696 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-777d-account-create-vqgb4" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.163583 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.168336 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbzxh\" (UniqueName: \"kubernetes.io/projected/f912654a-1ed1-419f-a02a-42dc38b92b75-kube-api-access-kbzxh\") pod \"nova-cell0-96ac-account-create-4wt7n\" (UID: \"f912654a-1ed1-419f-a02a-42dc38b92b75\") " pod="openstack/nova-cell0-96ac-account-create-4wt7n" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.181952 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-777d-account-create-vqgb4"] Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.281634 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-96ac-account-create-4wt7n" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.351657 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8kgs\" (UniqueName: \"kubernetes.io/projected/3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6-kube-api-access-h8kgs\") pod \"nova-cell1-777d-account-create-vqgb4\" (UID: \"3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6\") " pod="openstack/nova-cell1-777d-account-create-vqgb4" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.453796 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8kgs\" (UniqueName: \"kubernetes.io/projected/3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6-kube-api-access-h8kgs\") pod \"nova-cell1-777d-account-create-vqgb4\" (UID: \"3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6\") " pod="openstack/nova-cell1-777d-account-create-vqgb4" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.472053 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8kgs\" (UniqueName: \"kubernetes.io/projected/3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6-kube-api-access-h8kgs\") pod \"nova-cell1-777d-account-create-vqgb4\" (UID: \"3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6\") " pod="openstack/nova-cell1-777d-account-create-vqgb4" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.560300 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-777d-account-create-vqgb4" Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.579357 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1d3f-account-create-7kwk2"] Oct 13 08:07:07 crc kubenswrapper[4833]: I1013 08:07:07.731580 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-96ac-account-create-4wt7n"] Oct 13 08:07:07 crc kubenswrapper[4833]: W1013 08:07:07.740974 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf912654a_1ed1_419f_a02a_42dc38b92b75.slice/crio-37b50ed2e4a198496ca4af0e519e532c7e526712d0878847bb13b9d346feecc5 WatchSource:0}: Error finding container 37b50ed2e4a198496ca4af0e519e532c7e526712d0878847bb13b9d346feecc5: Status 404 returned error can't find the container with id 37b50ed2e4a198496ca4af0e519e532c7e526712d0878847bb13b9d346feecc5 Oct 13 08:07:08 crc kubenswrapper[4833]: I1013 08:07:08.010159 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-777d-account-create-vqgb4"] Oct 13 08:07:08 crc kubenswrapper[4833]: W1013 08:07:08.017480 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e3c4faf_8bcb_470d_ba72_7d24bdd8ddf6.slice/crio-93595ae8183149d1679c2134a170815ad5f779dea386ec238bd78cf630bd9ecf WatchSource:0}: Error finding container 93595ae8183149d1679c2134a170815ad5f779dea386ec238bd78cf630bd9ecf: Status 404 returned error can't find the container with id 93595ae8183149d1679c2134a170815ad5f779dea386ec238bd78cf630bd9ecf Oct 13 08:07:08 crc kubenswrapper[4833]: I1013 08:07:08.143507 4833 generic.go:334] "Generic (PLEG): container finished" podID="f912654a-1ed1-419f-a02a-42dc38b92b75" containerID="fb5432e2b6e6d68e6afd93f40a08f0a16e890be308d9d3c59af25bec8fd4be20" exitCode=0 Oct 13 08:07:08 crc kubenswrapper[4833]: I1013 08:07:08.143805 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-96ac-account-create-4wt7n" event={"ID":"f912654a-1ed1-419f-a02a-42dc38b92b75","Type":"ContainerDied","Data":"fb5432e2b6e6d68e6afd93f40a08f0a16e890be308d9d3c59af25bec8fd4be20"} Oct 13 08:07:08 crc kubenswrapper[4833]: I1013 08:07:08.143877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-96ac-account-create-4wt7n" event={"ID":"f912654a-1ed1-419f-a02a-42dc38b92b75","Type":"ContainerStarted","Data":"37b50ed2e4a198496ca4af0e519e532c7e526712d0878847bb13b9d346feecc5"} Oct 13 08:07:08 crc kubenswrapper[4833]: I1013 08:07:08.147587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-777d-account-create-vqgb4" event={"ID":"3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6","Type":"ContainerStarted","Data":"93595ae8183149d1679c2134a170815ad5f779dea386ec238bd78cf630bd9ecf"} Oct 13 08:07:08 crc kubenswrapper[4833]: I1013 08:07:08.150837 4833 generic.go:334] "Generic (PLEG): container finished" podID="03f7fa85-3f1e-4c32-bea6-dff73995d9bb" containerID="844da1007de31aaeb1f08883e7fe1588a1633322d95536a4866c627f144a2b52" exitCode=0 Oct 13 08:07:08 crc kubenswrapper[4833]: I1013 08:07:08.150964 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d3f-account-create-7kwk2" event={"ID":"03f7fa85-3f1e-4c32-bea6-dff73995d9bb","Type":"ContainerDied","Data":"844da1007de31aaeb1f08883e7fe1588a1633322d95536a4866c627f144a2b52"} Oct 13 08:07:08 crc kubenswrapper[4833]: I1013 08:07:08.151334 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d3f-account-create-7kwk2" event={"ID":"03f7fa85-3f1e-4c32-bea6-dff73995d9bb","Type":"ContainerStarted","Data":"da6e40606072a11bfceb9aa9ae9106f4f75389772281b4e7bf1a4844e698b864"} Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.165184 4833 generic.go:334] "Generic (PLEG): container finished" podID="3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6" containerID="b9528076d34c6f4ab7de8725ef42f400732f54ae3847ccac307abd56be6dcd10" exitCode=0 Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.165390 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-777d-account-create-vqgb4" event={"ID":"3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6","Type":"ContainerDied","Data":"b9528076d34c6f4ab7de8725ef42f400732f54ae3847ccac307abd56be6dcd10"} Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.672885 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d3f-account-create-7kwk2" Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.681151 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-96ac-account-create-4wt7n" Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.705663 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbzxh\" (UniqueName: \"kubernetes.io/projected/f912654a-1ed1-419f-a02a-42dc38b92b75-kube-api-access-kbzxh\") pod \"f912654a-1ed1-419f-a02a-42dc38b92b75\" (UID: \"f912654a-1ed1-419f-a02a-42dc38b92b75\") " Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.706020 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjhlm\" (UniqueName: \"kubernetes.io/projected/03f7fa85-3f1e-4c32-bea6-dff73995d9bb-kube-api-access-sjhlm\") pod \"03f7fa85-3f1e-4c32-bea6-dff73995d9bb\" (UID: \"03f7fa85-3f1e-4c32-bea6-dff73995d9bb\") " Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.712564 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f912654a-1ed1-419f-a02a-42dc38b92b75-kube-api-access-kbzxh" (OuterVolumeSpecName: "kube-api-access-kbzxh") pod "f912654a-1ed1-419f-a02a-42dc38b92b75" (UID: "f912654a-1ed1-419f-a02a-42dc38b92b75"). InnerVolumeSpecName "kube-api-access-kbzxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.720840 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f7fa85-3f1e-4c32-bea6-dff73995d9bb-kube-api-access-sjhlm" (OuterVolumeSpecName: "kube-api-access-sjhlm") pod "03f7fa85-3f1e-4c32-bea6-dff73995d9bb" (UID: "03f7fa85-3f1e-4c32-bea6-dff73995d9bb"). InnerVolumeSpecName "kube-api-access-sjhlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.809871 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjhlm\" (UniqueName: \"kubernetes.io/projected/03f7fa85-3f1e-4c32-bea6-dff73995d9bb-kube-api-access-sjhlm\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:09 crc kubenswrapper[4833]: I1013 08:07:09.809922 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbzxh\" (UniqueName: \"kubernetes.io/projected/f912654a-1ed1-419f-a02a-42dc38b92b75-kube-api-access-kbzxh\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.180219 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d3f-account-create-7kwk2" event={"ID":"03f7fa85-3f1e-4c32-bea6-dff73995d9bb","Type":"ContainerDied","Data":"da6e40606072a11bfceb9aa9ae9106f4f75389772281b4e7bf1a4844e698b864"} Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.180295 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da6e40606072a11bfceb9aa9ae9106f4f75389772281b4e7bf1a4844e698b864" Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.180240 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d3f-account-create-7kwk2" Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.183366 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-96ac-account-create-4wt7n" Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.183640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-96ac-account-create-4wt7n" event={"ID":"f912654a-1ed1-419f-a02a-42dc38b92b75","Type":"ContainerDied","Data":"37b50ed2e4a198496ca4af0e519e532c7e526712d0878847bb13b9d346feecc5"} Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.183887 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b50ed2e4a198496ca4af0e519e532c7e526712d0878847bb13b9d346feecc5" Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.653790 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-777d-account-create-vqgb4" Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.736818 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8kgs\" (UniqueName: \"kubernetes.io/projected/3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6-kube-api-access-h8kgs\") pod \"3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6\" (UID: \"3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6\") " Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.745891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6-kube-api-access-h8kgs" (OuterVolumeSpecName: "kube-api-access-h8kgs") pod "3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6" (UID: "3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6"). InnerVolumeSpecName "kube-api-access-h8kgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:10 crc kubenswrapper[4833]: I1013 08:07:10.839770 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8kgs\" (UniqueName: \"kubernetes.io/projected/3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6-kube-api-access-h8kgs\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:11 crc kubenswrapper[4833]: I1013 08:07:11.195358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-777d-account-create-vqgb4" event={"ID":"3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6","Type":"ContainerDied","Data":"93595ae8183149d1679c2134a170815ad5f779dea386ec238bd78cf630bd9ecf"} Oct 13 08:07:11 crc kubenswrapper[4833]: I1013 08:07:11.195614 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93595ae8183149d1679c2134a170815ad5f779dea386ec238bd78cf630bd9ecf" Oct 13 08:07:11 crc kubenswrapper[4833]: I1013 08:07:11.195657 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-777d-account-create-vqgb4" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.166613 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d5qlw"] Oct 13 08:07:12 crc kubenswrapper[4833]: E1013 08:07:12.167356 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f912654a-1ed1-419f-a02a-42dc38b92b75" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.167375 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f912654a-1ed1-419f-a02a-42dc38b92b75" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: E1013 08:07:12.167415 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.167424 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: E1013 08:07:12.167456 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f7fa85-3f1e-4c32-bea6-dff73995d9bb" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.167465 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f7fa85-3f1e-4c32-bea6-dff73995d9bb" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.167709 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f912654a-1ed1-419f-a02a-42dc38b92b75" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.167737 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f7fa85-3f1e-4c32-bea6-dff73995d9bb" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.167764 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6" containerName="mariadb-account-create" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.168551 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.170115 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qsqpg" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.170374 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.170993 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.179925 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d5qlw"] Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.267980 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg45d\" (UniqueName: \"kubernetes.io/projected/bc34cc55-e653-4a59-ae09-762011632de0-kube-api-access-mg45d\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.268054 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-scripts\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.268240 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-config-data\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.268380 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.371394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg45d\" (UniqueName: \"kubernetes.io/projected/bc34cc55-e653-4a59-ae09-762011632de0-kube-api-access-mg45d\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.371737 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-scripts\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.371834 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-config-data\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: 
\"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.371948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.380156 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.380288 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-scripts\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.387684 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg45d\" (UniqueName: \"kubernetes.io/projected/bc34cc55-e653-4a59-ae09-762011632de0-kube-api-access-mg45d\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.388758 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-config-data\") pod \"nova-cell0-conductor-db-sync-d5qlw\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.491239 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:12 crc kubenswrapper[4833]: I1013 08:07:12.669458 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:07:12 crc kubenswrapper[4833]: E1013 08:07:12.669697 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:07:13 crc kubenswrapper[4833]: I1013 08:07:13.023725 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d5qlw"] Oct 13 08:07:13 crc kubenswrapper[4833]: I1013 08:07:13.215738 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" event={"ID":"bc34cc55-e653-4a59-ae09-762011632de0","Type":"ContainerStarted","Data":"a69667235e16a56c18fbbea52a51157683029ce5975f076bbdfaefb13771b9eb"} Oct 13 08:07:13 crc kubenswrapper[4833]: I1013 08:07:13.216070 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" event={"ID":"bc34cc55-e653-4a59-ae09-762011632de0","Type":"ContainerStarted","Data":"0a1c5853a61206d0d167a87345969c74adf6ba8acf128d665ef6e5d7eb81f8b9"} Oct 13 08:07:14 crc kubenswrapper[4833]: I1013 08:07:14.281101 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" podStartSLOduration=2.281066854 podStartE2EDuration="2.281066854s" podCreationTimestamp="2025-10-13 08:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:14.277949665 +0000 UTC m=+5924.378372591" watchObservedRunningTime="2025-10-13 08:07:14.281066854 +0000 UTC m=+5924.381489810" Oct 13 08:07:18 crc kubenswrapper[4833]: I1013 08:07:18.270751 4833 generic.go:334] "Generic (PLEG): container finished" podID="bc34cc55-e653-4a59-ae09-762011632de0" containerID="a69667235e16a56c18fbbea52a51157683029ce5975f076bbdfaefb13771b9eb" exitCode=0 Oct 13 08:07:18 crc kubenswrapper[4833]: I1013 08:07:18.270943 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" event={"ID":"bc34cc55-e653-4a59-ae09-762011632de0","Type":"ContainerDied","Data":"a69667235e16a56c18fbbea52a51157683029ce5975f076bbdfaefb13771b9eb"} Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.695894 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.830725 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-combined-ca-bundle\") pod \"bc34cc55-e653-4a59-ae09-762011632de0\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.830837 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-config-data\") pod \"bc34cc55-e653-4a59-ae09-762011632de0\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.830918 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-scripts\") pod \"bc34cc55-e653-4a59-ae09-762011632de0\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.831011 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg45d\" (UniqueName: \"kubernetes.io/projected/bc34cc55-e653-4a59-ae09-762011632de0-kube-api-access-mg45d\") pod \"bc34cc55-e653-4a59-ae09-762011632de0\" (UID: \"bc34cc55-e653-4a59-ae09-762011632de0\") " Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.837792 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc34cc55-e653-4a59-ae09-762011632de0-kube-api-access-mg45d" (OuterVolumeSpecName: "kube-api-access-mg45d") pod "bc34cc55-e653-4a59-ae09-762011632de0" (UID: "bc34cc55-e653-4a59-ae09-762011632de0"). InnerVolumeSpecName "kube-api-access-mg45d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.844925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-scripts" (OuterVolumeSpecName: "scripts") pod "bc34cc55-e653-4a59-ae09-762011632de0" (UID: "bc34cc55-e653-4a59-ae09-762011632de0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.878321 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-config-data" (OuterVolumeSpecName: "config-data") pod "bc34cc55-e653-4a59-ae09-762011632de0" (UID: "bc34cc55-e653-4a59-ae09-762011632de0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.879321 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc34cc55-e653-4a59-ae09-762011632de0" (UID: "bc34cc55-e653-4a59-ae09-762011632de0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.934004 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.934031 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.934040 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34cc55-e653-4a59-ae09-762011632de0-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:19 crc kubenswrapper[4833]: I1013 08:07:19.934049 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg45d\" (UniqueName: \"kubernetes.io/projected/bc34cc55-e653-4a59-ae09-762011632de0-kube-api-access-mg45d\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.300043 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" event={"ID":"bc34cc55-e653-4a59-ae09-762011632de0","Type":"ContainerDied","Data":"0a1c5853a61206d0d167a87345969c74adf6ba8acf128d665ef6e5d7eb81f8b9"} Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.300201 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a1c5853a61206d0d167a87345969c74adf6ba8acf128d665ef6e5d7eb81f8b9" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.300357 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d5qlw" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.418050 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 08:07:20 crc kubenswrapper[4833]: E1013 08:07:20.418913 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34cc55-e653-4a59-ae09-762011632de0" containerName="nova-cell0-conductor-db-sync" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.418951 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34cc55-e653-4a59-ae09-762011632de0" containerName="nova-cell0-conductor-db-sync" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.419269 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34cc55-e653-4a59-ae09-762011632de0" containerName="nova-cell0-conductor-db-sync" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.423140 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.426753 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.427273 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qsqpg" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.438871 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.546077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89lj8\" (UniqueName: \"kubernetes.io/projected/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-kube-api-access-89lj8\") pod \"nova-cell0-conductor-0\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.546524 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.546761 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.648791 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.648909 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89lj8\" (UniqueName: \"kubernetes.io/projected/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-kube-api-access-89lj8\") pod \"nova-cell0-conductor-0\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.649018 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.653991 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.663228 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.672128 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89lj8\" (UniqueName: \"kubernetes.io/projected/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-kube-api-access-89lj8\") pod \"nova-cell0-conductor-0\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:20 crc kubenswrapper[4833]: I1013 08:07:20.760378 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:21 crc kubenswrapper[4833]: I1013 08:07:21.347801 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 08:07:22 crc kubenswrapper[4833]: I1013 08:07:22.330422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"31c249ee-bf3f-4fdd-8f22-56ef6f18881b","Type":"ContainerStarted","Data":"51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840"} Oct 13 08:07:22 crc kubenswrapper[4833]: I1013 08:07:22.330487 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"31c249ee-bf3f-4fdd-8f22-56ef6f18881b","Type":"ContainerStarted","Data":"6d3c0b39506d869cf1ebd1940f119acca1e8bd5c90ac3fc7618827598e28d1b3"} Oct 13 08:07:22 crc kubenswrapper[4833]: I1013 08:07:22.330597 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:22 crc kubenswrapper[4833]: I1013 08:07:22.356240 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.35617884 podStartE2EDuration="2.35617884s" podCreationTimestamp="2025-10-13 08:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:22.347685108 +0000 UTC m=+5932.448108034" watchObservedRunningTime="2025-10-13 08:07:22.35617884 +0000 UTC m=+5932.456601796" Oct 13 08:07:26 crc kubenswrapper[4833]: I1013 08:07:26.627580 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:07:26 crc kubenswrapper[4833]: E1013 08:07:26.628314 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:07:30 crc kubenswrapper[4833]: I1013 08:07:30.791831 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.246521 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-s55h8"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.247904 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.249981 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.269260 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.283652 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s55h8"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.385708 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.387560 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.387696 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbprn\" (UniqueName: \"kubernetes.io/projected/5505d43c-ff8b-4643-876b-843f46e64eb4-kube-api-access-lbprn\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.387760 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-scripts\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.387916 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-config-data\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.391241 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.396141 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.400159 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.455465 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.456906 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.459195 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.461334 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.473908 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.475348 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.477885 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.489575 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.489669 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbprn\" (UniqueName: \"kubernetes.io/projected/5505d43c-ff8b-4643-876b-843f46e64eb4-kube-api-access-lbprn\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.489716 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-scripts\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.489767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-config-data\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.495633 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-config-data\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.495689 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.498493 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.506593 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-scripts\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.542928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbprn\" (UniqueName: \"kubernetes.io/projected/5505d43c-ff8b-4643-876b-843f46e64eb4-kube-api-access-lbprn\") pod \"nova-cell0-cell-mapping-s55h8\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.569205 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594622 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpmql\" (UniqueName: \"kubernetes.io/projected/9a19ced6-af31-4914-ae8b-d1fca1d24580-kube-api-access-jpmql\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594698 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4jp\" (UniqueName: \"kubernetes.io/projected/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-kube-api-access-4n4jp\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594758 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlcv\" (UniqueName: \"kubernetes.io/projected/7c6d3ff7-e719-4d13-a433-2a29d18f854c-kube-api-access-9rlcv\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594779 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-config-data\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594806 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-logs\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594826 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594840 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594872 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594907 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-config-data\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.594932 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.652281 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.653757 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.661694 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696038 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpmql\" (UniqueName: \"kubernetes.io/projected/9a19ced6-af31-4914-ae8b-d1fca1d24580-kube-api-access-jpmql\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696112 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4jp\" (UniqueName: \"kubernetes.io/projected/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-kube-api-access-4n4jp\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696170 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlcv\" (UniqueName: \"kubernetes.io/projected/7c6d3ff7-e719-4d13-a433-2a29d18f854c-kube-api-access-9rlcv\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696191 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-config-data\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696221 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-logs\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696253 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696283 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696314 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-config-data\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.696344 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.710016 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.710819 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.718679 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-config-data\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.729636 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.735180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-config-data\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.740139 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-logs\") pod \"nova-api-0\" (UID: 
\"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.740617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.748740 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.754301 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlcv\" (UniqueName: \"kubernetes.io/projected/7c6d3ff7-e719-4d13-a433-2a29d18f854c-kube-api-access-9rlcv\") pod \"nova-scheduler-0\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.754924 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4jp\" (UniqueName: \"kubernetes.io/projected/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-kube-api-access-4n4jp\") pod \"nova-api-0\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " pod="openstack/nova-api-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.755651 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpmql\" (UniqueName: \"kubernetes.io/projected/9a19ced6-af31-4914-ae8b-d1fca1d24580-kube-api-access-jpmql\") pod \"nova-cell1-novncproxy-0\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.765667 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-759f89cf5-f8l8v"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.767240 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.777969 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.789666 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759f89cf5-f8l8v"] Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.799458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-config-data\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.799734 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-logs\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.799764 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkbtx\" (UniqueName: \"kubernetes.io/projected/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-kube-api-access-gkbtx\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.799795 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.801972 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.901575 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-config\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.901615 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nm9n\" (UniqueName: \"kubernetes.io/projected/526665e6-74a4-4ca4-a786-1ba03f0381e7-kube-api-access-6nm9n\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.901674 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-config-data\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.901845 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-logs\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.901893 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-dns-svc\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.901948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkbtx\" (UniqueName: \"kubernetes.io/projected/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-kube-api-access-gkbtx\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.902026 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.902062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-nb\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.902122 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-sb\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:31 crc 
kubenswrapper[4833]: I1013 08:07:31.902430 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-logs\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.917988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-config-data\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.926311 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkbtx\" (UniqueName: \"kubernetes.io/projected/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-kube-api-access-gkbtx\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:31 crc kubenswrapper[4833]: I1013 08:07:31.926509 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " pod="openstack/nova-metadata-0" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.003605 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-sb\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.003702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-config\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.003719 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nm9n\" (UniqueName: \"kubernetes.io/projected/526665e6-74a4-4ca4-a786-1ba03f0381e7-kube-api-access-6nm9n\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.003788 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-dns-svc\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.003839 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-nb\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.004646 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-nb\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.007247 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.007709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-dns-svc\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.007817 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-config\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.007929 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-sb\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.024187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nm9n\" (UniqueName: \"kubernetes.io/projected/526665e6-74a4-4ca4-a786-1ba03f0381e7-kube-api-access-6nm9n\") pod \"dnsmasq-dns-759f89cf5-f8l8v\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.040464 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.100180 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.256811 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s55h8"] Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.345777 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:07:32 crc kubenswrapper[4833]: W1013 08:07:32.383767 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a19ced6_af31_4914_ae8b_d1fca1d24580.slice/crio-f8a1fcf47c87ceeca0a17eeff996a47af64e25128abc31069feed916b28f5d11 WatchSource:0}: Error finding container f8a1fcf47c87ceeca0a17eeff996a47af64e25128abc31069feed916b28f5d11: Status 404 returned error can't find the container with id f8a1fcf47c87ceeca0a17eeff996a47af64e25128abc31069feed916b28f5d11 Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.404267 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.448343 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c6d3ff7-e719-4d13-a433-2a29d18f854c","Type":"ContainerStarted","Data":"b09f385897a5b309437b1bc16229b6905a88f7f3b6a9a18962c9c6014dde2d15"} Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.455090 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s55h8" event={"ID":"5505d43c-ff8b-4643-876b-843f46e64eb4","Type":"ContainerStarted","Data":"c78f08d4e7a0beebc34b9f9555af413f994961dfe9c8ecee5e548b9393eb4874"} Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.456654 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9a19ced6-af31-4914-ae8b-d1fca1d24580","Type":"ContainerStarted","Data":"f8a1fcf47c87ceeca0a17eeff996a47af64e25128abc31069feed916b28f5d11"} Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.460113 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4gwzv"] Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.462165 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.466205 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4gwzv"] Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.466736 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.467274 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.640643 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-config-data\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.641318 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.641517 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-scripts\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.641668 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ngg\" (UniqueName: \"kubernetes.io/projected/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-kube-api-access-d2ngg\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.701368 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.705142 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.726312 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759f89cf5-f8l8v"] Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.744260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-config-data\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.744358 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc 
kubenswrapper[4833]: I1013 08:07:32.744412 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-scripts\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.744460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ngg\" (UniqueName: \"kubernetes.io/projected/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-kube-api-access-d2ngg\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.766656 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-config-data\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.768768 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2ngg\" (UniqueName: \"kubernetes.io/projected/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-kube-api-access-d2ngg\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.771723 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-scripts\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.778081 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4gwzv\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:32 crc kubenswrapper[4833]: I1013 08:07:32.930408 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.415277 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4gwzv"] Oct 13 08:07:33 crc kubenswrapper[4833]: W1013 08:07:33.419885 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01ed8f4_afc5_4464_ab51_73cab30ef8d3.slice/crio-871aa26393bdd14a382055986c7ae41293d1f0d8e89a15ed4d12c32c1740df75 WatchSource:0}: Error finding container 871aa26393bdd14a382055986c7ae41293d1f0d8e89a15ed4d12c32c1740df75: Status 404 returned error can't find the container with id 871aa26393bdd14a382055986c7ae41293d1f0d8e89a15ed4d12c32c1740df75 Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.481823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" event={"ID":"f01ed8f4-afc5-4464-ab51-73cab30ef8d3","Type":"ContainerStarted","Data":"871aa26393bdd14a382055986c7ae41293d1f0d8e89a15ed4d12c32c1740df75"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.486755 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0","Type":"ContainerStarted","Data":"bdd7503cdac36f5800097f8b9c8bc7c3847e326cdf37a0c18c32b9ea3c495a26"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.486800 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0","Type":"ContainerStarted","Data":"5d6db2f133c9da49c1f87cae1675ab0581935de792f3f990541f93cf7ac04128"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.486812 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0","Type":"ContainerStarted","Data":"3d0872aed308ad1ae3cf5a5fd74f02592cbbd48836eb1c2c33ece71cb4a4386c"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.488806 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3","Type":"ContainerStarted","Data":"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.488831 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3","Type":"ContainerStarted","Data":"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.488841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3","Type":"ContainerStarted","Data":"7fde0e4466da006526962793bb3ef9b37d490143f4c2adee2829b6523f91e539"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.495573 4833 generic.go:334] "Generic (PLEG): container finished" podID="526665e6-74a4-4ca4-a786-1ba03f0381e7" containerID="3a12950486a442b8068f2be39e8faf0ce419cd56e2549098993d33d9fdd6600f" exitCode=0 Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.495978 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" event={"ID":"526665e6-74a4-4ca4-a786-1ba03f0381e7","Type":"ContainerDied","Data":"3a12950486a442b8068f2be39e8faf0ce419cd56e2549098993d33d9fdd6600f"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.496049 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" event={"ID":"526665e6-74a4-4ca4-a786-1ba03f0381e7","Type":"ContainerStarted","Data":"d2e60e5ad4c59b4ce79192fe75948038058e4c9c5b389b56c2400e5cdc6cf8b4"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.500706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9a19ced6-af31-4914-ae8b-d1fca1d24580","Type":"ContainerStarted","Data":"9922d338e11ab2e964cedc0603b6f4656c3f43fcf739f737108c656a8505f28e"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.502570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c6d3ff7-e719-4d13-a433-2a29d18f854c","Type":"ContainerStarted","Data":"024ba1767a31c709557d04aed2148b98ac9900471f3c88728823a8a55f89fbca"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.504462 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s55h8" event={"ID":"5505d43c-ff8b-4643-876b-843f46e64eb4","Type":"ContainerStarted","Data":"007d98f99ca4287b3c41915150b2deb6d9ef38eed2eb98d51a738384eb2af938"} Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.521508 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.521492172 podStartE2EDuration="2.521492172s" podCreationTimestamp="2025-10-13 08:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:33.510694245 +0000 UTC m=+5943.611117161" watchObservedRunningTime="2025-10-13 08:07:33.521492172 +0000 UTC m=+5943.621915088" Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.574695 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.574677183 podStartE2EDuration="2.574677183s" podCreationTimestamp="2025-10-13 08:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:33.571266506 +0000 UTC m=+5943.671689442" watchObservedRunningTime="2025-10-13 08:07:33.574677183 +0000 UTC m=+5943.675100099" Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.670356 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.670332922 podStartE2EDuration="2.670332922s" podCreationTimestamp="2025-10-13 08:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:33.610021018 +0000 UTC m=+5943.710443934" watchObservedRunningTime="2025-10-13 08:07:33.670332922 +0000 UTC m=+5943.770755838" Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.672661 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-s55h8" podStartSLOduration=2.672650568 podStartE2EDuration="2.672650568s" podCreationTimestamp="2025-10-13 08:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:33.628153263 +0000 UTC m=+5943.728576179" watchObservedRunningTime="2025-10-13 08:07:33.672650568 +0000 UTC m=+5943.773073494" Oct 13 08:07:33 crc kubenswrapper[4833]: I1013 08:07:33.695114 4833 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.695080916 podStartE2EDuration="2.695080916s" podCreationTimestamp="2025-10-13 08:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:33.653901765 +0000 UTC m=+5943.754324681" watchObservedRunningTime="2025-10-13 08:07:33.695080916 +0000 UTC m=+5943.795503832" Oct 13 08:07:34 crc kubenswrapper[4833]: I1013 08:07:34.568068 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" event={"ID":"f01ed8f4-afc5-4464-ab51-73cab30ef8d3","Type":"ContainerStarted","Data":"ae26aa2c4f38d7fe0ed33a96d3b8a7b74bd259f213a5dac353fd4f8f94978da1"} Oct 13 08:07:34 crc kubenswrapper[4833]: I1013 08:07:34.583481 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" event={"ID":"526665e6-74a4-4ca4-a786-1ba03f0381e7","Type":"ContainerStarted","Data":"89bff2068716fb0f8da1b85376cd84820b0a6df2416a945920897bc97b68a777"} Oct 13 08:07:34 crc kubenswrapper[4833]: I1013 08:07:34.583591 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:34 crc kubenswrapper[4833]: I1013 08:07:34.588323 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" podStartSLOduration=2.588302445 podStartE2EDuration="2.588302445s" podCreationTimestamp="2025-10-13 08:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:34.582106699 +0000 UTC m=+5944.682529615" watchObservedRunningTime="2025-10-13 08:07:34.588302445 +0000 UTC m=+5944.688725361" Oct 13 08:07:34 crc kubenswrapper[4833]: I1013 08:07:34.613522 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" podStartSLOduration=3.613498221 podStartE2EDuration="3.613498221s" podCreationTimestamp="2025-10-13 08:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:34.612368059 +0000 UTC m=+5944.712790975" watchObservedRunningTime="2025-10-13 08:07:34.613498221 +0000 UTC m=+5944.713921127" Oct 13 08:07:35 crc kubenswrapper[4833]: I1013 08:07:35.533231 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:35 crc kubenswrapper[4833]: I1013 08:07:35.557473 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:35 crc kubenswrapper[4833]: I1013 08:07:35.615398 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9a19ced6-af31-4914-ae8b-d1fca1d24580" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9922d338e11ab2e964cedc0603b6f4656c3f43fcf739f737108c656a8505f28e" gracePeriod=30 Oct 13 08:07:35 crc kubenswrapper[4833]: I1013 08:07:35.615852 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerName="nova-metadata-log" containerID="cri-o://5d6db2f133c9da49c1f87cae1675ab0581935de792f3f990541f93cf7ac04128" gracePeriod=30 Oct 13 08:07:35 crc kubenswrapper[4833]: I1013 08:07:35.615923 4833 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerName="nova-metadata-metadata" containerID="cri-o://bdd7503cdac36f5800097f8b9c8bc7c3847e326cdf37a0c18c32b9ea3c495a26" gracePeriod=30 Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.629965 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a19ced6-af31-4914-ae8b-d1fca1d24580" containerID="9922d338e11ab2e964cedc0603b6f4656c3f43fcf739f737108c656a8505f28e" exitCode=0 Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.631959 4833 generic.go:334] "Generic (PLEG): container finished" podID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerID="bdd7503cdac36f5800097f8b9c8bc7c3847e326cdf37a0c18c32b9ea3c495a26" exitCode=0 Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.632001 4833 generic.go:334] "Generic (PLEG): container finished" podID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerID="5d6db2f133c9da49c1f87cae1675ab0581935de792f3f990541f93cf7ac04128" exitCode=143 Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.647800 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9a19ced6-af31-4914-ae8b-d1fca1d24580","Type":"ContainerDied","Data":"9922d338e11ab2e964cedc0603b6f4656c3f43fcf739f737108c656a8505f28e"} Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.647841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9a19ced6-af31-4914-ae8b-d1fca1d24580","Type":"ContainerDied","Data":"f8a1fcf47c87ceeca0a17eeff996a47af64e25128abc31069feed916b28f5d11"} Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.647853 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8a1fcf47c87ceeca0a17eeff996a47af64e25128abc31069feed916b28f5d11" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.647863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0","Type":"ContainerDied","Data":"bdd7503cdac36f5800097f8b9c8bc7c3847e326cdf37a0c18c32b9ea3c495a26"} Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.647874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0","Type":"ContainerDied","Data":"5d6db2f133c9da49c1f87cae1675ab0581935de792f3f990541f93cf7ac04128"} Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.699066 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.715483 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.773697 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-combined-ca-bundle\") pod \"9a19ced6-af31-4914-ae8b-d1fca1d24580\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.773962 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-logs\") pod \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.774011 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpmql\" (UniqueName: \"kubernetes.io/projected/9a19ced6-af31-4914-ae8b-d1fca1d24580-kube-api-access-jpmql\") pod \"9a19ced6-af31-4914-ae8b-d1fca1d24580\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.774054 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-config-data\") pod \"9a19ced6-af31-4914-ae8b-d1fca1d24580\" (UID: \"9a19ced6-af31-4914-ae8b-d1fca1d24580\") " Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.774103 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-combined-ca-bundle\") pod \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.774142 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-config-data\") pod \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.774185 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkbtx\" (UniqueName: \"kubernetes.io/projected/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-kube-api-access-gkbtx\") pod \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\" (UID: \"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0\") " Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.775862 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-logs" (OuterVolumeSpecName: "logs") pod "bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" (UID: "bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.776390 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.779680 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.786519 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-kube-api-access-gkbtx" (OuterVolumeSpecName: "kube-api-access-gkbtx") pod "bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" (UID: "bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0"). InnerVolumeSpecName "kube-api-access-gkbtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.786944 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a19ced6-af31-4914-ae8b-d1fca1d24580-kube-api-access-jpmql" (OuterVolumeSpecName: "kube-api-access-jpmql") pod "9a19ced6-af31-4914-ae8b-d1fca1d24580" (UID: "9a19ced6-af31-4914-ae8b-d1fca1d24580"). InnerVolumeSpecName "kube-api-access-jpmql". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.815665 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-config-data" (OuterVolumeSpecName: "config-data") pod "9a19ced6-af31-4914-ae8b-d1fca1d24580" (UID: "9a19ced6-af31-4914-ae8b-d1fca1d24580"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.816721 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-config-data" (OuterVolumeSpecName: "config-data") pod "bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" (UID: "bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.819645 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" (UID: "bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.820866 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a19ced6-af31-4914-ae8b-d1fca1d24580" (UID: "9a19ced6-af31-4914-ae8b-d1fca1d24580"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.878611 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpmql\" (UniqueName: \"kubernetes.io/projected/9a19ced6-af31-4914-ae8b-d1fca1d24580-kube-api-access-jpmql\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.878656 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.878669 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.878679 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.878690 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkbtx\" (UniqueName: \"kubernetes.io/projected/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0-kube-api-access-gkbtx\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:36 crc kubenswrapper[4833]: I1013 08:07:36.878701 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a19ced6-af31-4914-ae8b-d1fca1d24580-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.642094 4833 generic.go:334] "Generic (PLEG): container finished" podID="5505d43c-ff8b-4643-876b-843f46e64eb4" containerID="007d98f99ca4287b3c41915150b2deb6d9ef38eed2eb98d51a738384eb2af938" exitCode=0 Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.642160 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s55h8" event={"ID":"5505d43c-ff8b-4643-876b-843f46e64eb4","Type":"ContainerDied","Data":"007d98f99ca4287b3c41915150b2deb6d9ef38eed2eb98d51a738384eb2af938"} Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.643734 4833 generic.go:334] "Generic (PLEG): container finished" podID="f01ed8f4-afc5-4464-ab51-73cab30ef8d3" containerID="ae26aa2c4f38d7fe0ed33a96d3b8a7b74bd259f213a5dac353fd4f8f94978da1" exitCode=0 Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.643775 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" event={"ID":"f01ed8f4-afc5-4464-ab51-73cab30ef8d3","Type":"ContainerDied","Data":"ae26aa2c4f38d7fe0ed33a96d3b8a7b74bd259f213a5dac353fd4f8f94978da1"} Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.647251 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.647621 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0","Type":"ContainerDied","Data":"3d0872aed308ad1ae3cf5a5fd74f02592cbbd48836eb1c2c33ece71cb4a4386c"} Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.647657 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.647661 4833 scope.go:117] "RemoveContainer" containerID="bdd7503cdac36f5800097f8b9c8bc7c3847e326cdf37a0c18c32b9ea3c495a26" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.672169 4833 scope.go:117] "RemoveContainer" containerID="5d6db2f133c9da49c1f87cae1675ab0581935de792f3f990541f93cf7ac04128" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.714808 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.730697 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.757620 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.757917 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:37 crc kubenswrapper[4833]: E1013 08:07:37.758373 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a19ced6-af31-4914-ae8b-d1fca1d24580" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.758395 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a19ced6-af31-4914-ae8b-d1fca1d24580" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 08:07:37 crc kubenswrapper[4833]: E1013 08:07:37.758422 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerName="nova-metadata-metadata" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.758430 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerName="nova-metadata-metadata" Oct 13 08:07:37 crc kubenswrapper[4833]: E1013 08:07:37.758468 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerName="nova-metadata-log" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.758477 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerName="nova-metadata-log" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.758714 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerName="nova-metadata-log" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.758744 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" containerName="nova-metadata-metadata" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.758762 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a19ced6-af31-4914-ae8b-d1fca1d24580" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.759958 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.765329 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.765586 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.773227 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.794407 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8lgg\" (UniqueName: \"kubernetes.io/projected/3fee74ca-0acd-4fc9-850c-cff641106990-kube-api-access-p8lgg\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.794483 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.794619 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.794659 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-config-data\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.794786 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fee74ca-0acd-4fc9-850c-cff641106990-logs\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.803553 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.815608 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.817455 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.822100 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.822329 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.822476 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.824165 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896110 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-config-data\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896184 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896246 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fee74ca-0acd-4fc9-850c-cff641106990-logs\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896291 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896315 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8lgg\" (UniqueName: \"kubernetes.io/projected/3fee74ca-0acd-4fc9-850c-cff641106990-kube-api-access-p8lgg\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896404 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ql4\" (UniqueName: \"kubernetes.io/projected/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-kube-api-access-n8ql4\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896449 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896487 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896697 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.896817 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fee74ca-0acd-4fc9-850c-cff641106990-logs\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.900622 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.907999 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.908713 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-config-data\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.911078 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8lgg\" (UniqueName: \"kubernetes.io/projected/3fee74ca-0acd-4fc9-850c-cff641106990-kube-api-access-p8lgg\") pod \"nova-metadata-0\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " pod="openstack/nova-metadata-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.998831 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:37 crc kubenswrapper[4833]: I1013 08:07:37.999468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.000292 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8ql4\" (UniqueName: \"kubernetes.io/projected/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-kube-api-access-n8ql4\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.000341 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.000394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.004374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.005066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.005367 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.006406 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.017756 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8ql4\" (UniqueName: \"kubernetes.io/projected/a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7-kube-api-access-n8ql4\") pod \"nova-cell1-novncproxy-0\" (UID: \"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.105597 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.144784 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.431210 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:38 crc kubenswrapper[4833]: W1013 08:07:38.434909 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fee74ca_0acd_4fc9_850c_cff641106990.slice/crio-42ea1f2e4814fd42635ecb301461c4b32c6eeb2114913e12e7515bdea14605fa WatchSource:0}: Error finding container 42ea1f2e4814fd42635ecb301461c4b32c6eeb2114913e12e7515bdea14605fa: Status 404 returned error can't find the container with id 42ea1f2e4814fd42635ecb301461c4b32c6eeb2114913e12e7515bdea14605fa Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.528953 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 08:07:38 crc kubenswrapper[4833]: W1013 08:07:38.529721 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda37e4fb8_49e9_4edd_aea8_a60a7b3e1db7.slice/crio-ccc8aacf0cb4f2b184f91361593a44409b32bb21805800580940d0f2827700f5 WatchSource:0}: Error finding container ccc8aacf0cb4f2b184f91361593a44409b32bb21805800580940d0f2827700f5: Status 404 returned error can't find the container with id ccc8aacf0cb4f2b184f91361593a44409b32bb21805800580940d0f2827700f5 Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.641702 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a19ced6-af31-4914-ae8b-d1fca1d24580" path="/var/lib/kubelet/pods/9a19ced6-af31-4914-ae8b-d1fca1d24580/volumes" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.642733 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0" path="/var/lib/kubelet/pods/bdefeb80-f2e0-46a4-8e31-fbf2e4ec21f0/volumes" Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.658475 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3fee74ca-0acd-4fc9-850c-cff641106990","Type":"ContainerStarted","Data":"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191"} Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.658564 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3fee74ca-0acd-4fc9-850c-cff641106990","Type":"ContainerStarted","Data":"42ea1f2e4814fd42635ecb301461c4b32c6eeb2114913e12e7515bdea14605fa"} Oct 13 08:07:38 crc kubenswrapper[4833]: I1013 08:07:38.661425 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7","Type":"ContainerStarted","Data":"ccc8aacf0cb4f2b184f91361593a44409b32bb21805800580940d0f2827700f5"} Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.091319 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.097583 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.132341 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbprn\" (UniqueName: \"kubernetes.io/projected/5505d43c-ff8b-4643-876b-843f46e64eb4-kube-api-access-lbprn\") pod \"5505d43c-ff8b-4643-876b-843f46e64eb4\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.132728 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-scripts\") pod \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.132989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-config-data\") pod \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.133077 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-combined-ca-bundle\") pod \"5505d43c-ff8b-4643-876b-843f46e64eb4\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.133175 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-config-data\") pod \"5505d43c-ff8b-4643-876b-843f46e64eb4\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.133267 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2ngg\" (UniqueName: \"kubernetes.io/projected/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-kube-api-access-d2ngg\") pod \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.133422 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-combined-ca-bundle\") pod \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\" (UID: \"f01ed8f4-afc5-4464-ab51-73cab30ef8d3\") " Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.133519 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-scripts\") pod \"5505d43c-ff8b-4643-876b-843f46e64eb4\" (UID: \"5505d43c-ff8b-4643-876b-843f46e64eb4\") " Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.153424 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-scripts" (OuterVolumeSpecName: "scripts") pod "f01ed8f4-afc5-4464-ab51-73cab30ef8d3" (UID: "f01ed8f4-afc5-4464-ab51-73cab30ef8d3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.153552 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5505d43c-ff8b-4643-876b-843f46e64eb4-kube-api-access-lbprn" (OuterVolumeSpecName: "kube-api-access-lbprn") pod "5505d43c-ff8b-4643-876b-843f46e64eb4" (UID: "5505d43c-ff8b-4643-876b-843f46e64eb4"). InnerVolumeSpecName "kube-api-access-lbprn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.153890 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-scripts" (OuterVolumeSpecName: "scripts") pod "5505d43c-ff8b-4643-876b-843f46e64eb4" (UID: "5505d43c-ff8b-4643-876b-843f46e64eb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.154508 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-kube-api-access-d2ngg" (OuterVolumeSpecName: "kube-api-access-d2ngg") pod "f01ed8f4-afc5-4464-ab51-73cab30ef8d3" (UID: "f01ed8f4-afc5-4464-ab51-73cab30ef8d3"). InnerVolumeSpecName "kube-api-access-d2ngg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.170253 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f01ed8f4-afc5-4464-ab51-73cab30ef8d3" (UID: "f01ed8f4-afc5-4464-ab51-73cab30ef8d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.176135 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5505d43c-ff8b-4643-876b-843f46e64eb4" (UID: "5505d43c-ff8b-4643-876b-843f46e64eb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.181604 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-config-data" (OuterVolumeSpecName: "config-data") pod "f01ed8f4-afc5-4464-ab51-73cab30ef8d3" (UID: "f01ed8f4-afc5-4464-ab51-73cab30ef8d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.202467 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-config-data" (OuterVolumeSpecName: "config-data") pod "5505d43c-ff8b-4643-876b-843f46e64eb4" (UID: "5505d43c-ff8b-4643-876b-843f46e64eb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.236564 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbprn\" (UniqueName: \"kubernetes.io/projected/5505d43c-ff8b-4643-876b-843f46e64eb4-kube-api-access-lbprn\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.236597 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.236606 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.236616 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.236625 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.236633 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2ngg\" (UniqueName: \"kubernetes.io/projected/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-kube-api-access-d2ngg\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.236642 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01ed8f4-afc5-4464-ab51-73cab30ef8d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.236678 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5505d43c-ff8b-4643-876b-843f46e64eb4-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.628240 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.685146 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s55h8" event={"ID":"5505d43c-ff8b-4643-876b-843f46e64eb4","Type":"ContainerDied","Data":"c78f08d4e7a0beebc34b9f9555af413f994961dfe9c8ecee5e548b9393eb4874"} Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.685199 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c78f08d4e7a0beebc34b9f9555af413f994961dfe9c8ecee5e548b9393eb4874" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.685321 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s55h8" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.701988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" event={"ID":"f01ed8f4-afc5-4464-ab51-73cab30ef8d3","Type":"ContainerDied","Data":"871aa26393bdd14a382055986c7ae41293d1f0d8e89a15ed4d12c32c1740df75"} Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.702024 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="871aa26393bdd14a382055986c7ae41293d1f0d8e89a15ed4d12c32c1740df75" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.702092 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4gwzv" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.708386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3fee74ca-0acd-4fc9-850c-cff641106990","Type":"ContainerStarted","Data":"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3"} Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.715869 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7","Type":"ContainerStarted","Data":"56f50c8b843e161650915e5906833accc6b8aec6b83ad4cc4aae2b7fca02b0af"} Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.754329 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.754303623 podStartE2EDuration="2.754303623s" podCreationTimestamp="2025-10-13 08:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:39.736047564 +0000 UTC m=+5949.836470520" watchObservedRunningTime="2025-10-13 08:07:39.754303623 +0000 UTC m=+5949.854726549" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.786129 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 08:07:39 crc kubenswrapper[4833]: E1013 08:07:39.786639 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5505d43c-ff8b-4643-876b-843f46e64eb4" containerName="nova-manage" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.786660 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5505d43c-ff8b-4643-876b-843f46e64eb4" containerName="nova-manage" Oct 13 08:07:39 crc kubenswrapper[4833]: E1013 08:07:39.786696 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01ed8f4-afc5-4464-ab51-73cab30ef8d3" containerName="nova-cell1-conductor-db-sync" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.786707 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01ed8f4-afc5-4464-ab51-73cab30ef8d3" containerName="nova-cell1-conductor-db-sync" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.786952 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5505d43c-ff8b-4643-876b-843f46e64eb4" containerName="nova-manage" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.786969 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01ed8f4-afc5-4464-ab51-73cab30ef8d3" containerName="nova-cell1-conductor-db-sync" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.787774 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.791702 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.793672 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.793654311 podStartE2EDuration="2.793654311s" podCreationTimestamp="2025-10-13 08:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:39.778611314 +0000 UTC m=+5949.879034270" watchObservedRunningTime="2025-10-13 08:07:39.793654311 +0000 UTC m=+5949.894077227" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.807678 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.846186 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.846426 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk29g\" (UniqueName: \"kubernetes.io/projected/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-kube-api-access-hk29g\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.846501 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.943612 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.943906 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerName="nova-api-log" containerID="cri-o://019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa" gracePeriod=30 Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.944060 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerName="nova-api-api" containerID="cri-o://b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec" gracePeriod=30 Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.948635 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.948747 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.948776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk29g\" (UniqueName: \"kubernetes.io/projected/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-kube-api-access-hk29g\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.954682 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.956068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.956965 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.957205 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7c6d3ff7-e719-4d13-a433-2a29d18f854c" containerName="nova-scheduler-scheduler" containerID="cri-o://024ba1767a31c709557d04aed2148b98ac9900471f3c88728823a8a55f89fbca" gracePeriod=30 Oct 13 08:07:39 crc kubenswrapper[4833]: I1013 08:07:39.967083 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk29g\" (UniqueName: \"kubernetes.io/projected/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-kube-api-access-hk29g\") pod \"nova-cell1-conductor-0\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.015733 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.125914 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.461914 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.560046 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-combined-ca-bundle\") pod \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.560119 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-config-data\") pod \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.560258 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n4jp\" (UniqueName: \"kubernetes.io/projected/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-kube-api-access-4n4jp\") pod \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.560298 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-logs\") pod \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\" (UID: \"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3\") " Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.560944 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-logs" (OuterVolumeSpecName: "logs") pod "50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" (UID: "50974d74-3ac7-44b9-89dc-6e6eb7cff4b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.565759 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-kube-api-access-4n4jp" (OuterVolumeSpecName: "kube-api-access-4n4jp") pod "50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" (UID: "50974d74-3ac7-44b9-89dc-6e6eb7cff4b3"). InnerVolumeSpecName "kube-api-access-4n4jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.595410 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-config-data" (OuterVolumeSpecName: "config-data") pod "50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" (UID: "50974d74-3ac7-44b9-89dc-6e6eb7cff4b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.603316 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" (UID: "50974d74-3ac7-44b9-89dc-6e6eb7cff4b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.667487 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n4jp\" (UniqueName: \"kubernetes.io/projected/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-kube-api-access-4n4jp\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.668157 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.668271 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.668359 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.692467 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.727841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4","Type":"ContainerStarted","Data":"3883e17b878d1363e5b7015af706abad93c7898a89f3fee27da8817323702a79"} Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.731487 4833 generic.go:334] "Generic (PLEG): container finished" podID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerID="b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec" exitCode=0 Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.731658 4833 generic.go:334] "Generic (PLEG): container finished" podID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerID="019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa" exitCode=143 Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.731898 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3","Type":"ContainerDied","Data":"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec"} Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.732110 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3","Type":"ContainerDied","Data":"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa"} Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.732231 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50974d74-3ac7-44b9-89dc-6e6eb7cff4b3","Type":"ContainerDied","Data":"7fde0e4466da006526962793bb3ef9b37d490143f4c2adee2829b6523f91e539"} Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.732142 4833 scope.go:117] "RemoveContainer" containerID="b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.731904 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.738946 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"e914b3d8cdaf60183f90c0954b20183878f4438a5f2a58b95970c43060e5f653"} Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.768958 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.776677 4833 scope.go:117] "RemoveContainer" containerID="019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.785644 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.807774 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:40 crc kubenswrapper[4833]: E1013 08:07:40.808140 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerName="nova-api-log" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.808154 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerName="nova-api-log" Oct 13 08:07:40 crc kubenswrapper[4833]: E1013 08:07:40.808173 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerName="nova-api-api" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.808180 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerName="nova-api-api" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.808659 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerName="nova-api-api" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.808678 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" containerName="nova-api-log" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.810007 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.812123 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.828458 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.828904 4833 scope.go:117] "RemoveContainer" containerID="b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec" Oct 13 08:07:40 crc kubenswrapper[4833]: E1013 08:07:40.829291 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec\": container with ID starting with b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec not found: ID does not exist" containerID="b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.829318 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec"} err="failed to get container status \"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec\": rpc error: code = NotFound desc = could not find container \"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec\": container with ID starting with b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec not found: ID does not exist" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.829339 4833 scope.go:117] "RemoveContainer" containerID="019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa" Oct 13 08:07:40 crc kubenswrapper[4833]: E1013 08:07:40.830916 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa\": container with ID starting with 019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa not found: ID does not exist" containerID="019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.830941 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa"} err="failed to get container status \"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa\": rpc error: code = NotFound desc = could not find container \"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa\": container with ID starting with 019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa not found: ID does not exist" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.830955 4833 scope.go:117] "RemoveContainer" containerID="b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.831381 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec"} err="failed to get container status \"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec\": rpc error: code = NotFound desc = could not find container \"b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec\": container with ID starting with 
b22396e86a0e3e2a0adeab1c4cb3efd43977d33bf32d478856efb510635ec7ec not found: ID does not exist" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.831399 4833 scope.go:117] "RemoveContainer" containerID="019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.831981 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa"} err="failed to get container status \"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa\": rpc error: code = NotFound desc = could not find container \"019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa\": container with ID starting with 019469fdd92f610ba9165e869fff36c9ab9939ba3dc7edba36f14a827ed68ffa not found: ID does not exist" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.872530 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.872610 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6cwf\" (UniqueName: \"kubernetes.io/projected/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-kube-api-access-v6cwf\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.872714 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-config-data\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.873123 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-logs\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.975156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-logs\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.975210 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.975249 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6cwf\" (UniqueName: \"kubernetes.io/projected/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-kube-api-access-v6cwf\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.975352 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-config-data\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.975705 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-logs\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.980979 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.981994 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-config-data\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:40 crc kubenswrapper[4833]: I1013 08:07:40.993804 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6cwf\" (UniqueName: \"kubernetes.io/projected/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-kube-api-access-v6cwf\") pod \"nova-api-0\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " pod="openstack/nova-api-0" Oct 13 08:07:41 crc kubenswrapper[4833]: I1013 08:07:41.146121 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:07:41 crc kubenswrapper[4833]: W1013 08:07:41.474255 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fbb1bba_9f62_4e2b_9a56_4eeccbadc866.slice/crio-102220f42f498452b93181c260c6575773f9a0324b39be159cb4d45bb76154ca WatchSource:0}: Error finding container 102220f42f498452b93181c260c6575773f9a0324b39be159cb4d45bb76154ca: Status 404 returned error can't find the container with id 102220f42f498452b93181c260c6575773f9a0324b39be159cb4d45bb76154ca Oct 13 08:07:41 crc kubenswrapper[4833]: I1013 08:07:41.481268 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:41 crc kubenswrapper[4833]: I1013 08:07:41.761086 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866","Type":"ContainerStarted","Data":"ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4"} Oct 13 08:07:41 crc kubenswrapper[4833]: I1013 08:07:41.761481 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866","Type":"ContainerStarted","Data":"102220f42f498452b93181c260c6575773f9a0324b39be159cb4d45bb76154ca"} Oct 13 08:07:41 crc kubenswrapper[4833]: I1013 08:07:41.764925 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" containerName="nova-metadata-log" containerID="cri-o://61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191" gracePeriod=30 Oct 13 08:07:41 crc kubenswrapper[4833]: I1013 08:07:41.766365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4","Type":"ContainerStarted","Data":"9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092"} Oct 13 08:07:41 crc kubenswrapper[4833]: I1013 08:07:41.766414 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:41 crc kubenswrapper[4833]: I1013 08:07:41.766629 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" containerName="nova-metadata-metadata" containerID="cri-o://7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3" gracePeriod=30 Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.102802 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.145648 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.145623653 podStartE2EDuration="3.145623653s" podCreationTimestamp="2025-10-13 08:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:41.788317077 +0000 UTC m=+5951.888740033" watchObservedRunningTime="2025-10-13 08:07:42.145623653 +0000 UTC m=+5952.246046579" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.179858 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c69c676c-gns7n"] Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.180704 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" podUID="953364b7-0926-40ea-a171-667d73c6af22" containerName="dnsmasq-dns" containerID="cri-o://1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6" gracePeriod=10 Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.381858 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.407305 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8lgg\" (UniqueName: \"kubernetes.io/projected/3fee74ca-0acd-4fc9-850c-cff641106990-kube-api-access-p8lgg\") pod \"3fee74ca-0acd-4fc9-850c-cff641106990\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.407424 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-nova-metadata-tls-certs\") pod \"3fee74ca-0acd-4fc9-850c-cff641106990\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.407564 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fee74ca-0acd-4fc9-850c-cff641106990-logs\") pod \"3fee74ca-0acd-4fc9-850c-cff641106990\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.407666 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-config-data\") pod \"3fee74ca-0acd-4fc9-850c-cff641106990\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.407716 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-combined-ca-bundle\") pod \"3fee74ca-0acd-4fc9-850c-cff641106990\" (UID: \"3fee74ca-0acd-4fc9-850c-cff641106990\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.414623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fee74ca-0acd-4fc9-850c-cff641106990-kube-api-access-p8lgg" (OuterVolumeSpecName: "kube-api-access-p8lgg") pod "3fee74ca-0acd-4fc9-850c-cff641106990" (UID: "3fee74ca-0acd-4fc9-850c-cff641106990"). InnerVolumeSpecName "kube-api-access-p8lgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.417000 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fee74ca-0acd-4fc9-850c-cff641106990-logs" (OuterVolumeSpecName: "logs") pod "3fee74ca-0acd-4fc9-850c-cff641106990" (UID: "3fee74ca-0acd-4fc9-850c-cff641106990"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.438762 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-config-data" (OuterVolumeSpecName: "config-data") pod "3fee74ca-0acd-4fc9-850c-cff641106990" (UID: "3fee74ca-0acd-4fc9-850c-cff641106990"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.444730 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fee74ca-0acd-4fc9-850c-cff641106990" (UID: "3fee74ca-0acd-4fc9-850c-cff641106990"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.500748 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3fee74ca-0acd-4fc9-850c-cff641106990" (UID: "3fee74ca-0acd-4fc9-850c-cff641106990"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.515110 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fee74ca-0acd-4fc9-850c-cff641106990-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.515145 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.515154 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.515165 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8lgg\" (UniqueName: \"kubernetes.io/projected/3fee74ca-0acd-4fc9-850c-cff641106990-kube-api-access-p8lgg\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.515174 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fee74ca-0acd-4fc9-850c-cff641106990-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.643795 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50974d74-3ac7-44b9-89dc-6e6eb7cff4b3" path="/var/lib/kubelet/pods/50974d74-3ac7-44b9-89dc-6e6eb7cff4b3/volumes" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.653096 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.717421 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b65tz\" (UniqueName: \"kubernetes.io/projected/953364b7-0926-40ea-a171-667d73c6af22-kube-api-access-b65tz\") pod \"953364b7-0926-40ea-a171-667d73c6af22\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.717472 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-config\") pod \"953364b7-0926-40ea-a171-667d73c6af22\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.717648 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-nb\") pod \"953364b7-0926-40ea-a171-667d73c6af22\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.717670 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-dns-svc\") pod \"953364b7-0926-40ea-a171-667d73c6af22\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.717712 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-sb\") pod \"953364b7-0926-40ea-a171-667d73c6af22\" (UID: \"953364b7-0926-40ea-a171-667d73c6af22\") " Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.722527 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953364b7-0926-40ea-a171-667d73c6af22-kube-api-access-b65tz" (OuterVolumeSpecName: "kube-api-access-b65tz") pod "953364b7-0926-40ea-a171-667d73c6af22" (UID: "953364b7-0926-40ea-a171-667d73c6af22"). InnerVolumeSpecName "kube-api-access-b65tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.784803 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "953364b7-0926-40ea-a171-667d73c6af22" (UID: "953364b7-0926-40ea-a171-667d73c6af22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.785758 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-config" (OuterVolumeSpecName: "config") pod "953364b7-0926-40ea-a171-667d73c6af22" (UID: "953364b7-0926-40ea-a171-667d73c6af22"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.789633 4833 generic.go:334] "Generic (PLEG): container finished" podID="953364b7-0926-40ea-a171-667d73c6af22" containerID="1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6" exitCode=0 Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.789711 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" event={"ID":"953364b7-0926-40ea-a171-667d73c6af22","Type":"ContainerDied","Data":"1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6"} Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.789742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" event={"ID":"953364b7-0926-40ea-a171-667d73c6af22","Type":"ContainerDied","Data":"136b55b285e867424cff27c11ab026238466b8bda2a2334ee6af7cc80e9022a7"} Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.789765 4833 scope.go:117] "RemoveContainer" containerID="1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.789940 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c69c676c-gns7n" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.791775 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "953364b7-0926-40ea-a171-667d73c6af22" (UID: "953364b7-0926-40ea-a171-667d73c6af22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.796651 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "953364b7-0926-40ea-a171-667d73c6af22" (UID: "953364b7-0926-40ea-a171-667d73c6af22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.798309 4833 generic.go:334] "Generic (PLEG): container finished" podID="3fee74ca-0acd-4fc9-850c-cff641106990" containerID="7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3" exitCode=0 Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.798335 4833 generic.go:334] "Generic (PLEG): container finished" podID="3fee74ca-0acd-4fc9-850c-cff641106990" containerID="61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191" exitCode=143 Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.798362 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.798391 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3fee74ca-0acd-4fc9-850c-cff641106990","Type":"ContainerDied","Data":"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3"} Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.798435 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3fee74ca-0acd-4fc9-850c-cff641106990","Type":"ContainerDied","Data":"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191"} Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.798447 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3fee74ca-0acd-4fc9-850c-cff641106990","Type":"ContainerDied","Data":"42ea1f2e4814fd42635ecb301461c4b32c6eeb2114913e12e7515bdea14605fa"} Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.805003 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866","Type":"ContainerStarted","Data":"db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a"} Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.819728 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.819760 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.819776 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.819789 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b65tz\" (UniqueName: \"kubernetes.io/projected/953364b7-0926-40ea-a171-667d73c6af22-kube-api-access-b65tz\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.819804 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953364b7-0926-40ea-a171-667d73c6af22-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.819885 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.819938 4833 scope.go:117] "RemoveContainer" containerID="ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.836582 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.840452 4833 scope.go:117] "RemoveContainer" containerID="1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6" Oct 13 08:07:42 crc kubenswrapper[4833]: E1013 08:07:42.840839 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6\": container with ID starting with 
1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6 not found: ID does not exist" containerID="1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.840872 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6"} err="failed to get container status \"1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6\": rpc error: code = NotFound desc = could not find container \"1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6\": container with ID starting with 1cc71928a0bba02ca73ce7976ba78bca6fd510360f74300ca1af1758151da3d6 not found: ID does not exist" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.840898 4833 scope.go:117] "RemoveContainer" containerID="ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d" Oct 13 08:07:42 crc kubenswrapper[4833]: E1013 08:07:42.841151 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d\": container with ID starting with ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d not found: ID does not exist" containerID="ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.841173 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d"} err="failed to get container status \"ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d\": rpc error: code = NotFound desc = could not find container \"ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d\": container with ID starting with ab80501931bd49f99404012678e293cf867dbbef41fd32e0d93bb2923d8e5b1d not found: ID does not exist" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.841185 4833 scope.go:117] "RemoveContainer" containerID="7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.846977 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:42 crc kubenswrapper[4833]: E1013 08:07:42.847655 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" containerName="nova-metadata-log" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.847680 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" containerName="nova-metadata-log" Oct 13 08:07:42 crc kubenswrapper[4833]: E1013 08:07:42.847714 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953364b7-0926-40ea-a171-667d73c6af22" containerName="dnsmasq-dns" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.847722 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="953364b7-0926-40ea-a171-667d73c6af22" containerName="dnsmasq-dns" Oct 13 08:07:42 crc kubenswrapper[4833]: E1013 08:07:42.847742 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953364b7-0926-40ea-a171-667d73c6af22" containerName="init" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.847751 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="953364b7-0926-40ea-a171-667d73c6af22" containerName="init" Oct 13 08:07:42 crc kubenswrapper[4833]: E1013 
08:07:42.847771 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" containerName="nova-metadata-metadata" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.847779 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" containerName="nova-metadata-metadata" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.847974 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" containerName="nova-metadata-log" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.847994 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="953364b7-0926-40ea-a171-667d73c6af22" containerName="dnsmasq-dns" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.848008 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" containerName="nova-metadata-metadata" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.850363 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.851940 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.851923469 podStartE2EDuration="2.851923469s" podCreationTimestamp="2025-10-13 08:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:42.830608863 +0000 UTC m=+5952.931031779" watchObservedRunningTime="2025-10-13 08:07:42.851923469 +0000 UTC m=+5952.952346385" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.852473 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.854044 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.862583 4833 scope.go:117] "RemoveContainer" containerID="61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.877482 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.880145 4833 scope.go:117] "RemoveContainer" containerID="7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3" Oct 13 08:07:42 crc kubenswrapper[4833]: E1013 08:07:42.882107 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3\": container with ID starting with 7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3 not found: ID does not exist" containerID="7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.882137 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3"} err="failed to get container status \"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3\": rpc error: code = NotFound desc = could not find container \"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3\": container with ID starting with 
7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3 not found: ID does not exist" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.882161 4833 scope.go:117] "RemoveContainer" containerID="61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191" Oct 13 08:07:42 crc kubenswrapper[4833]: E1013 08:07:42.883156 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191\": container with ID starting with 61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191 not found: ID does not exist" containerID="61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.883193 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191"} err="failed to get container status \"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191\": rpc error: code = NotFound desc = could not find container \"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191\": container with ID starting with 61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191 not found: ID does not exist" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.883219 4833 scope.go:117] "RemoveContainer" containerID="7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.883823 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3"} err="failed to get container status \"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3\": rpc error: code = NotFound desc = could not find container \"7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3\": container with ID starting with 7e22b315b38d621d705bb4f03e90ae30c780e0b49bda5309bbfc142305fc2ef3 not found: ID does not exist" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.883872 4833 scope.go:117] "RemoveContainer" containerID="61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.884213 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191"} err="failed to get container status \"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191\": rpc error: code = NotFound desc = could not find container \"61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191\": container with ID starting with 61763daa876d1710fe89487d01e61d149b57388bfb3f0267d457d7d2add69191 not found: ID does not exist" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.921694 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqbx\" (UniqueName: \"kubernetes.io/projected/0244f671-bab9-471f-aa1a-557de2d2763d-kube-api-access-jxqbx\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.921774 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.921830 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-config-data\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.921909 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:42 crc kubenswrapper[4833]: I1013 08:07:42.921967 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0244f671-bab9-471f-aa1a-557de2d2763d-logs\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.023206 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-config-data\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.023303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.023349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0244f671-bab9-471f-aa1a-557de2d2763d-logs\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.023385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxqbx\" (UniqueName: \"kubernetes.io/projected/0244f671-bab9-471f-aa1a-557de2d2763d-kube-api-access-jxqbx\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.023419 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.023957 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0244f671-bab9-471f-aa1a-557de2d2763d-logs\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.026924 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.027149 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-config-data\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.028123 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.057167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxqbx\" (UniqueName: \"kubernetes.io/projected/0244f671-bab9-471f-aa1a-557de2d2763d-kube-api-access-jxqbx\") pod \"nova-metadata-0\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.130977 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c69c676c-gns7n"] Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.139737 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c69c676c-gns7n"] Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.145871 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.168516 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.452859 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.813977 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0244f671-bab9-471f-aa1a-557de2d2763d","Type":"ContainerStarted","Data":"bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f"} Oct 13 08:07:43 crc kubenswrapper[4833]: I1013 08:07:43.814251 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0244f671-bab9-471f-aa1a-557de2d2763d","Type":"ContainerStarted","Data":"391f0ded19983ffdf2ea44bb1249f6f20f1f548bb12ef373e68e1d55ccbef5e5"} Oct 13 08:07:44 crc kubenswrapper[4833]: I1013 08:07:44.645954 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fee74ca-0acd-4fc9-850c-cff641106990" path="/var/lib/kubelet/pods/3fee74ca-0acd-4fc9-850c-cff641106990/volumes" Oct 13 08:07:44 crc kubenswrapper[4833]: I1013 08:07:44.647206 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953364b7-0926-40ea-a171-667d73c6af22" path="/var/lib/kubelet/pods/953364b7-0926-40ea-a171-667d73c6af22/volumes" Oct 13 08:07:44 crc kubenswrapper[4833]: I1013 08:07:44.832390 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0244f671-bab9-471f-aa1a-557de2d2763d","Type":"ContainerStarted","Data":"0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a"} Oct 13 08:07:44 crc kubenswrapper[4833]: I1013 08:07:44.875421 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.875396424 podStartE2EDuration="2.875396424s" podCreationTimestamp="2025-10-13 08:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:44.859582144 +0000 UTC m=+5954.960005100" watchObservedRunningTime="2025-10-13 08:07:44.875396424 +0000 UTC m=+5954.975819370" Oct 13 08:07:45 crc kubenswrapper[4833]: I1013 08:07:45.182819 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 13 08:07:48 crc kubenswrapper[4833]: I1013 08:07:48.145428 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:48 crc kubenswrapper[4833]: I1013 08:07:48.169257 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 08:07:48 crc kubenswrapper[4833]: I1013 08:07:48.169314 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 08:07:48 crc kubenswrapper[4833]: I1013 08:07:48.171469 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:48 crc kubenswrapper[4833]: I1013 08:07:48.894365 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.046672 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k6ds6"] Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.047770 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.049482 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.052046 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.076845 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k6ds6"] Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.180236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-config-data\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.180700 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-scripts\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.180815 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4gs\" (UniqueName: \"kubernetes.io/projected/e1320a48-dc29-40b2-bdb6-b46a47c920a8-kube-api-access-vs4gs\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.180924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.282233 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.282359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-config-data\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.283459 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-scripts\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.283600 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4gs\" (UniqueName: 
\"kubernetes.io/projected/e1320a48-dc29-40b2-bdb6-b46a47c920a8-kube-api-access-vs4gs\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.289541 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-scripts\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.289551 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-config-data\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.296007 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.305679 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4gs\" (UniqueName: \"kubernetes.io/projected/e1320a48-dc29-40b2-bdb6-b46a47c920a8-kube-api-access-vs4gs\") pod \"nova-cell1-cell-mapping-k6ds6\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.368505 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.666760 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k6ds6"] Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.884947 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k6ds6" event={"ID":"e1320a48-dc29-40b2-bdb6-b46a47c920a8","Type":"ContainerStarted","Data":"04303dc519d7f152ba090fffe804beeca60ccce1aa5d48b0723905b79b3ff0f3"} Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.885270 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k6ds6" event={"ID":"e1320a48-dc29-40b2-bdb6-b46a47c920a8","Type":"ContainerStarted","Data":"8736dd7438c39fc420ee3ca612fbcf90484c2985984659d26f0a36ff1b46a283"} Oct 13 08:07:49 crc kubenswrapper[4833]: I1013 08:07:49.916468 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k6ds6" podStartSLOduration=0.91644672 podStartE2EDuration="916.44672ms" podCreationTimestamp="2025-10-13 08:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:07:49.906711503 +0000 UTC m=+5960.007134419" watchObservedRunningTime="2025-10-13 08:07:49.91644672 +0000 UTC m=+5960.016869646" Oct 13 08:07:51 crc kubenswrapper[4833]: I1013 08:07:51.146918 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 08:07:51 crc kubenswrapper[4833]: I1013 08:07:51.147241 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 08:07:52 crc kubenswrapper[4833]: I1013 08:07:52.228928 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.93:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 08:07:52 crc kubenswrapper[4833]: I1013 08:07:52.228947 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.93:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 08:07:53 crc kubenswrapper[4833]: I1013 08:07:53.169376 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 08:07:53 crc kubenswrapper[4833]: I1013 08:07:53.176120 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 08:07:54 crc kubenswrapper[4833]: I1013 08:07:54.184756 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.94:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 08:07:54 crc kubenswrapper[4833]: I1013 08:07:54.184844 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.94:8775/\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Oct 13 08:07:54 crc kubenswrapper[4833]: I1013 08:07:54.941760 4833 generic.go:334] "Generic (PLEG): container finished" podID="e1320a48-dc29-40b2-bdb6-b46a47c920a8" containerID="04303dc519d7f152ba090fffe804beeca60ccce1aa5d48b0723905b79b3ff0f3" exitCode=0 Oct 13 08:07:54 crc kubenswrapper[4833]: I1013 08:07:54.941858 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k6ds6" event={"ID":"e1320a48-dc29-40b2-bdb6-b46a47c920a8","Type":"ContainerDied","Data":"04303dc519d7f152ba090fffe804beeca60ccce1aa5d48b0723905b79b3ff0f3"} Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.320633 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.426206 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-scripts\") pod \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.426306 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs4gs\" (UniqueName: \"kubernetes.io/projected/e1320a48-dc29-40b2-bdb6-b46a47c920a8-kube-api-access-vs4gs\") pod \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.426350 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-combined-ca-bundle\") pod \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.426391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-config-data\") pod \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\" (UID: \"e1320a48-dc29-40b2-bdb6-b46a47c920a8\") " Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.433456 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1320a48-dc29-40b2-bdb6-b46a47c920a8-kube-api-access-vs4gs" (OuterVolumeSpecName: "kube-api-access-vs4gs") pod "e1320a48-dc29-40b2-bdb6-b46a47c920a8" (UID: "e1320a48-dc29-40b2-bdb6-b46a47c920a8"). InnerVolumeSpecName "kube-api-access-vs4gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.433841 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-scripts" (OuterVolumeSpecName: "scripts") pod "e1320a48-dc29-40b2-bdb6-b46a47c920a8" (UID: "e1320a48-dc29-40b2-bdb6-b46a47c920a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.452255 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1320a48-dc29-40b2-bdb6-b46a47c920a8" (UID: "e1320a48-dc29-40b2-bdb6-b46a47c920a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.455708 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-config-data" (OuterVolumeSpecName: "config-data") pod "e1320a48-dc29-40b2-bdb6-b46a47c920a8" (UID: "e1320a48-dc29-40b2-bdb6-b46a47c920a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.528469 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.528506 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs4gs\" (UniqueName: \"kubernetes.io/projected/e1320a48-dc29-40b2-bdb6-b46a47c920a8-kube-api-access-vs4gs\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.528518 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.528527 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1320a48-dc29-40b2-bdb6-b46a47c920a8-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.980529 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k6ds6" event={"ID":"e1320a48-dc29-40b2-bdb6-b46a47c920a8","Type":"ContainerDied","Data":"8736dd7438c39fc420ee3ca612fbcf90484c2985984659d26f0a36ff1b46a283"} Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.980589 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8736dd7438c39fc420ee3ca612fbcf90484c2985984659d26f0a36ff1b46a283" Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.980659 4833 util.go:48] "No ready sandbox for pod can be found. 
Oct 13 08:07:56 crc kubenswrapper[4833]: I1013 08:07:56.980659 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k6ds6" Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.168504 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.168841 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-log" containerID="cri-o://ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4" gracePeriod=30 Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.168958 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-api" containerID="cri-o://db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a" gracePeriod=30 Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.245791 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.246274 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-log" containerID="cri-o://bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f" gracePeriod=30 Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.246363 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-metadata" containerID="cri-o://0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a" gracePeriod=30 Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.992924 4833 generic.go:334] "Generic (PLEG): container finished" podID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerID="ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4" exitCode=143 Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.993017 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866","Type":"ContainerDied","Data":"ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4"} Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.995505 4833 generic.go:334] "Generic (PLEG): container finished" podID="0244f671-bab9-471f-aa1a-557de2d2763d" containerID="bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f" exitCode=143 Oct 13 08:07:57 crc kubenswrapper[4833]: I1013 08:07:57.995614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0244f671-bab9-471f-aa1a-557de2d2763d","Type":"ContainerDied","Data":"bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f"} Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.868151 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.908197 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0244f671-bab9-471f-aa1a-557de2d2763d-logs\") pod \"0244f671-bab9-471f-aa1a-557de2d2763d\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.908304 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxqbx\" (UniqueName: \"kubernetes.io/projected/0244f671-bab9-471f-aa1a-557de2d2763d-kube-api-access-jxqbx\") pod \"0244f671-bab9-471f-aa1a-557de2d2763d\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.908355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-config-data\") pod \"0244f671-bab9-471f-aa1a-557de2d2763d\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.908393 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-combined-ca-bundle\") pod \"0244f671-bab9-471f-aa1a-557de2d2763d\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.908415 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-nova-metadata-tls-certs\") pod \"0244f671-bab9-471f-aa1a-557de2d2763d\" (UID: \"0244f671-bab9-471f-aa1a-557de2d2763d\") " Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.908850 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0244f671-bab9-471f-aa1a-557de2d2763d-logs" (OuterVolumeSpecName: "logs") pod "0244f671-bab9-471f-aa1a-557de2d2763d" (UID: "0244f671-bab9-471f-aa1a-557de2d2763d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.913230 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0244f671-bab9-471f-aa1a-557de2d2763d-kube-api-access-jxqbx" (OuterVolumeSpecName: "kube-api-access-jxqbx") pod "0244f671-bab9-471f-aa1a-557de2d2763d" (UID: "0244f671-bab9-471f-aa1a-557de2d2763d"). InnerVolumeSpecName "kube-api-access-jxqbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.935291 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0244f671-bab9-471f-aa1a-557de2d2763d" (UID: "0244f671-bab9-471f-aa1a-557de2d2763d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.936850 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-config-data" (OuterVolumeSpecName: "config-data") pod "0244f671-bab9-471f-aa1a-557de2d2763d" (UID: "0244f671-bab9-471f-aa1a-557de2d2763d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:00 crc kubenswrapper[4833]: I1013 08:08:00.951694 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0244f671-bab9-471f-aa1a-557de2d2763d" (UID: "0244f671-bab9-471f-aa1a-557de2d2763d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.010323 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.010350 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.010361 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0244f671-bab9-471f-aa1a-557de2d2763d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.010371 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0244f671-bab9-471f-aa1a-557de2d2763d-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.010380 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxqbx\" (UniqueName: \"kubernetes.io/projected/0244f671-bab9-471f-aa1a-557de2d2763d-kube-api-access-jxqbx\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.022651 4833 generic.go:334] "Generic (PLEG): container finished" podID="0244f671-bab9-471f-aa1a-557de2d2763d" containerID="0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a" exitCode=0 Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.022691 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0244f671-bab9-471f-aa1a-557de2d2763d","Type":"ContainerDied","Data":"0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a"} Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.022738 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0244f671-bab9-471f-aa1a-557de2d2763d","Type":"ContainerDied","Data":"391f0ded19983ffdf2ea44bb1249f6f20f1f548bb12ef373e68e1d55ccbef5e5"} Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.022746 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.022756 4833 scope.go:117] "RemoveContainer" containerID="0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.050884 4833 scope.go:117] "RemoveContainer" containerID="bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.057746 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.069272 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.075351 4833 scope.go:117] "RemoveContainer" containerID="0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.075869 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:08:01 crc kubenswrapper[4833]: E1013 08:08:01.077185 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-log" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.077200 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-log" Oct 13 08:08:01 crc kubenswrapper[4833]: E1013 08:08:01.077224 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1320a48-dc29-40b2-bdb6-b46a47c920a8" containerName="nova-manage" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.077230 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1320a48-dc29-40b2-bdb6-b46a47c920a8" containerName="nova-manage" Oct 13 08:08:01 crc kubenswrapper[4833]: E1013 08:08:01.077245 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-metadata" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.077251 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-metadata" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.077422 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-log" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.077439 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1320a48-dc29-40b2-bdb6-b46a47c920a8" containerName="nova-manage" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.077454 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" containerName="nova-metadata-metadata" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.078735 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: E1013 08:08:01.079264 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a\": container with ID starting with 0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a not found: ID does not exist" containerID="0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.079324 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a"} err="failed to get container status \"0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a\": rpc error: code = NotFound desc = could not find container \"0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a\": container with ID starting with 0bd2ef0e1951590095cb02c8630946b6cdf98d5554b1fb30f18a846af8fa7e6a not found: ID does not exist" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.079357 4833 scope.go:117] "RemoveContainer" containerID="bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f" Oct 13 08:08:01 crc kubenswrapper[4833]: E1013 08:08:01.080287 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f\": container with ID starting with bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f not found: ID does not exist" containerID="bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.080334 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f"} err="failed to get container status \"bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f\": rpc error: code = NotFound desc = could not find container \"bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f\": container with ID starting with bde1074814d247bbe64f25c553a260c0e0a23185a842d8128ff8627255c6b82f not found: ID does not exist" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.080356 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.083894 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.084870 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.114073 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.114124 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b2cj\" (UniqueName: \"kubernetes.io/projected/be48f910-53ab-4cbd-a846-543d163f0edc-kube-api-access-9b2cj\") pod 
\"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.114184 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.114274 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-config-data\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.114324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be48f910-53ab-4cbd-a846-543d163f0edc-logs\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.215745 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-config-data\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.215823 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be48f910-53ab-4cbd-a846-543d163f0edc-logs\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.215859 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.215881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b2cj\" (UniqueName: \"kubernetes.io/projected/be48f910-53ab-4cbd-a846-543d163f0edc-kube-api-access-9b2cj\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.215918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.216883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be48f910-53ab-4cbd-a846-543d163f0edc-logs\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.219120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.219619 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-config-data\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.220173 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.230612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b2cj\" (UniqueName: \"kubernetes.io/projected/be48f910-53ab-4cbd-a846-543d163f0edc-kube-api-access-9b2cj\") pod \"nova-metadata-0\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.395252 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 08:08:01 crc kubenswrapper[4833]: W1013 08:08:01.883406 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe48f910_53ab_4cbd_a846_543d163f0edc.slice/crio-c6ad4afc0709fe5b7d35e61e2b8bfe97d0e6e5c35832a90a70bc8b55cc0f5246 WatchSource:0}: Error finding container c6ad4afc0709fe5b7d35e61e2b8bfe97d0e6e5c35832a90a70bc8b55cc0f5246: Status 404 returned error can't find the container with id c6ad4afc0709fe5b7d35e61e2b8bfe97d0e6e5c35832a90a70bc8b55cc0f5246 Oct 13 08:08:01 crc kubenswrapper[4833]: I1013 08:08:01.885884 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 08:08:02 crc kubenswrapper[4833]: I1013 08:08:02.045383 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be48f910-53ab-4cbd-a846-543d163f0edc","Type":"ContainerStarted","Data":"c6ad4afc0709fe5b7d35e61e2b8bfe97d0e6e5c35832a90a70bc8b55cc0f5246"} Oct 13 08:08:02 crc kubenswrapper[4833]: I1013 08:08:02.641396 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0244f671-bab9-471f-aa1a-557de2d2763d" path="/var/lib/kubelet/pods/0244f671-bab9-471f-aa1a-557de2d2763d/volumes" Oct 13 08:08:03 crc kubenswrapper[4833]: I1013 08:08:03.057784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be48f910-53ab-4cbd-a846-543d163f0edc","Type":"ContainerStarted","Data":"ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c"} Oct 13 08:08:03 crc kubenswrapper[4833]: I1013 08:08:03.058149 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be48f910-53ab-4cbd-a846-543d163f0edc","Type":"ContainerStarted","Data":"a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a"} Oct 13 08:08:03 crc kubenswrapper[4833]: I1013 08:08:03.086944 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.086928005 podStartE2EDuration="2.086928005s" 
podCreationTimestamp="2025-10-13 08:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:08:03.086127432 +0000 UTC m=+5973.186550358" watchObservedRunningTime="2025-10-13 08:08:03.086928005 +0000 UTC m=+5973.187350931" Oct 13 08:08:06 crc kubenswrapper[4833]: I1013 08:08:06.395990 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 08:08:06 crc kubenswrapper[4833]: I1013 08:08:06.396629 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.138984 4833 generic.go:334] "Generic (PLEG): container finished" podID="7c6d3ff7-e719-4d13-a433-2a29d18f854c" containerID="024ba1767a31c709557d04aed2148b98ac9900471f3c88728823a8a55f89fbca" exitCode=137 Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.139070 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c6d3ff7-e719-4d13-a433-2a29d18f854c","Type":"ContainerDied","Data":"024ba1767a31c709557d04aed2148b98ac9900471f3c88728823a8a55f89fbca"} Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.452323 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.538560 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-combined-ca-bundle\") pod \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.538728 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rlcv\" (UniqueName: \"kubernetes.io/projected/7c6d3ff7-e719-4d13-a433-2a29d18f854c-kube-api-access-9rlcv\") pod \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.538846 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-config-data\") pod \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\" (UID: \"7c6d3ff7-e719-4d13-a433-2a29d18f854c\") " Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.545559 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6d3ff7-e719-4d13-a433-2a29d18f854c-kube-api-access-9rlcv" (OuterVolumeSpecName: "kube-api-access-9rlcv") pod "7c6d3ff7-e719-4d13-a433-2a29d18f854c" (UID: "7c6d3ff7-e719-4d13-a433-2a29d18f854c"). InnerVolumeSpecName "kube-api-access-9rlcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.576751 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c6d3ff7-e719-4d13-a433-2a29d18f854c" (UID: "7c6d3ff7-e719-4d13-a433-2a29d18f854c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.585322 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-config-data" (OuterVolumeSpecName: "config-data") pod "7c6d3ff7-e719-4d13-a433-2a29d18f854c" (UID: "7c6d3ff7-e719-4d13-a433-2a29d18f854c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.640841 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.640871 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rlcv\" (UniqueName: \"kubernetes.io/projected/7c6d3ff7-e719-4d13-a433-2a29d18f854c-kube-api-access-9rlcv\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:10 crc kubenswrapper[4833]: I1013 08:08:10.640885 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6d3ff7-e719-4d13-a433-2a29d18f854c-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.001469 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.048877 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6cwf\" (UniqueName: \"kubernetes.io/projected/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-kube-api-access-v6cwf\") pod \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.048960 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-combined-ca-bundle\") pod \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.049106 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-config-data\") pod \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.049360 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-logs\") pod \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\" (UID: \"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866\") " Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.049817 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-logs" (OuterVolumeSpecName: "logs") pod "7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" (UID: "7fbb1bba-9f62-4e2b-9a56-4eeccbadc866"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.050061 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.052908 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-kube-api-access-v6cwf" (OuterVolumeSpecName: "kube-api-access-v6cwf") pod "7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" (UID: "7fbb1bba-9f62-4e2b-9a56-4eeccbadc866"). InnerVolumeSpecName "kube-api-access-v6cwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.083956 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" (UID: "7fbb1bba-9f62-4e2b-9a56-4eeccbadc866"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.085195 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-config-data" (OuterVolumeSpecName: "config-data") pod "7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" (UID: "7fbb1bba-9f62-4e2b-9a56-4eeccbadc866"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.151997 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6cwf\" (UniqueName: \"kubernetes.io/projected/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-kube-api-access-v6cwf\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.152052 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.152082 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.152471 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.152476 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c6d3ff7-e719-4d13-a433-2a29d18f854c","Type":"ContainerDied","Data":"b09f385897a5b309437b1bc16229b6905a88f7f3b6a9a18962c9c6014dde2d15"} Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.152624 4833 scope.go:117] "RemoveContainer" containerID="024ba1767a31c709557d04aed2148b98ac9900471f3c88728823a8a55f89fbca" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.156232 4833 generic.go:334] "Generic (PLEG): container finished" podID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerID="db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a" exitCode=0 Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.156289 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866","Type":"ContainerDied","Data":"db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a"} Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.156324 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fbb1bba-9f62-4e2b-9a56-4eeccbadc866","Type":"ContainerDied","Data":"102220f42f498452b93181c260c6575773f9a0324b39be159cb4d45bb76154ca"} Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.156398 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.199467 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.200080 4833 scope.go:117] "RemoveContainer" containerID="db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.213958 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.231948 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.243470 4833 scope.go:117] "RemoveContainer" containerID="ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.251691 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.263799 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:08:11 crc kubenswrapper[4833]: E1013 08:08:11.264150 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6d3ff7-e719-4d13-a433-2a29d18f854c" containerName="nova-scheduler-scheduler" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.264166 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6d3ff7-e719-4d13-a433-2a29d18f854c" containerName="nova-scheduler-scheduler" Oct 13 08:08:11 crc kubenswrapper[4833]: E1013 08:08:11.264176 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-api" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.264182 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-api" Oct 13 08:08:11 crc kubenswrapper[4833]: E1013 08:08:11.264216 4833 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-log" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.264222 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-log" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.264389 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-log" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.264427 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6d3ff7-e719-4d13-a433-2a29d18f854c" containerName="nova-scheduler-scheduler" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.264441 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" containerName="nova-api-api" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.265020 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.267047 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.274187 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.275518 4833 scope.go:117] "RemoveContainer" containerID="db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a" Oct 13 08:08:11 crc kubenswrapper[4833]: E1013 08:08:11.275866 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a\": container with ID starting with db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a not found: ID does not exist" containerID="db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.275891 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a"} err="failed to get container status \"db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a\": rpc error: code = NotFound desc = could not find container \"db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a\": container with ID starting with db3b11fde02911f650c70d13aa4c2fc5cc4089e025af378e16d42281042cb86a not found: ID does not exist" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.275911 4833 scope.go:117] "RemoveContainer" containerID="ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.276254 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: E1013 08:08:11.276409 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4\": container with ID starting with ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4 not found: ID does not exist" containerID="ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.276471 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4"} err="failed to get container status \"ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4\": rpc error: code = NotFound desc = could not find container \"ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4\": container with ID starting with ce15656998aa9c8772205f2f2050fafdbdb2acb2c77dd51c2748662c61e27ee4 not found: ID does not exist" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.283126 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.286667 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.296206 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.355641 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6klz\" (UniqueName: \"kubernetes.io/projected/6f63dce5-488f-43e5-8217-5c855de31f30-kube-api-access-m6klz\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.355701 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdpth\" (UniqueName: \"kubernetes.io/projected/7e4d3e81-c9e7-4424-b93e-67e6563537fe-kube-api-access-wdpth\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.355829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-config-data\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.356024 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.356074 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4d3e81-c9e7-4424-b93e-67e6563537fe-logs\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.356238 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-config-data\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.356293 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.396089 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.396172 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.458495 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4d3e81-c9e7-4424-b93e-67e6563537fe-logs\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.458645 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-config-data\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.458683 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.458723 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6klz\" (UniqueName: \"kubernetes.io/projected/6f63dce5-488f-43e5-8217-5c855de31f30-kube-api-access-m6klz\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.458754 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdpth\" (UniqueName: \"kubernetes.io/projected/7e4d3e81-c9e7-4424-b93e-67e6563537fe-kube-api-access-wdpth\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.458869 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-config-data\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.458980 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 
08:08:11.459165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4d3e81-c9e7-4424-b93e-67e6563537fe-logs\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.463680 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.465104 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-config-data\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.465974 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-config-data\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.473340 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.497644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdpth\" (UniqueName: \"kubernetes.io/projected/7e4d3e81-c9e7-4424-b93e-67e6563537fe-kube-api-access-wdpth\") pod \"nova-api-0\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " pod="openstack/nova-api-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.508012 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6klz\" (UniqueName: \"kubernetes.io/projected/6f63dce5-488f-43e5-8217-5c855de31f30-kube-api-access-m6klz\") pod \"nova-scheduler-0\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") " pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.588972 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 08:08:11 crc kubenswrapper[4833]: I1013 08:08:11.594142 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:08:12 crc kubenswrapper[4833]: I1013 08:08:12.083794 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:12 crc kubenswrapper[4833]: I1013 08:08:12.137783 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 08:08:12 crc kubenswrapper[4833]: I1013 08:08:12.167865 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f63dce5-488f-43e5-8217-5c855de31f30","Type":"ContainerStarted","Data":"0cd933f168e8ecaa2d3c078603a96b419e40c4f5dca6de96a79d50261b2006ba"} Oct 13 08:08:12 crc kubenswrapper[4833]: I1013 08:08:12.168940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e4d3e81-c9e7-4424-b93e-67e6563537fe","Type":"ContainerStarted","Data":"1ae53c4d943573c641fb156799df8b18d077f2a07624b127c449a1621b6cad23"} Oct 13 08:08:12 crc kubenswrapper[4833]: I1013 08:08:12.411076 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 08:08:12 crc kubenswrapper[4833]: I1013 08:08:12.411145 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 08:08:12 crc kubenswrapper[4833]: I1013 08:08:12.637297 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6d3ff7-e719-4d13-a433-2a29d18f854c" path="/var/lib/kubelet/pods/7c6d3ff7-e719-4d13-a433-2a29d18f854c/volumes" Oct 13 08:08:12 crc kubenswrapper[4833]: I1013 08:08:12.637960 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbb1bba-9f62-4e2b-9a56-4eeccbadc866" path="/var/lib/kubelet/pods/7fbb1bba-9f62-4e2b-9a56-4eeccbadc866/volumes" Oct 13 08:08:13 crc kubenswrapper[4833]: I1013 08:08:13.183181 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f63dce5-488f-43e5-8217-5c855de31f30","Type":"ContainerStarted","Data":"545758d2f6f4b36d5c73ab20671c7d7211159aa78a02182bff7454e234246929"} Oct 13 08:08:13 crc kubenswrapper[4833]: I1013 08:08:13.186096 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e4d3e81-c9e7-4424-b93e-67e6563537fe","Type":"ContainerStarted","Data":"548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b"} Oct 13 08:08:13 crc kubenswrapper[4833]: I1013 08:08:13.186133 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e4d3e81-c9e7-4424-b93e-67e6563537fe","Type":"ContainerStarted","Data":"db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd"} Oct 13 08:08:13 crc kubenswrapper[4833]: I1013 08:08:13.224050 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.224027891 podStartE2EDuration="2.224027891s" podCreationTimestamp="2025-10-13 08:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:08:13.203624401 +0000 UTC 
m=+5983.304047337" watchObservedRunningTime="2025-10-13 08:08:13.224027891 +0000 UTC m=+5983.324450807" Oct 13 08:08:13 crc kubenswrapper[4833]: I1013 08:08:13.231861 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.231837823 podStartE2EDuration="2.231837823s" podCreationTimestamp="2025-10-13 08:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:08:13.21871549 +0000 UTC m=+5983.319138416" watchObservedRunningTime="2025-10-13 08:08:13.231837823 +0000 UTC m=+5983.332260749" Oct 13 08:08:16 crc kubenswrapper[4833]: I1013 08:08:16.589206 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 13 08:08:21 crc kubenswrapper[4833]: I1013 08:08:21.401433 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 13 08:08:21 crc kubenswrapper[4833]: I1013 08:08:21.407366 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 13 08:08:21 crc kubenswrapper[4833]: I1013 08:08:21.415733 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 13 08:08:21 crc kubenswrapper[4833]: I1013 08:08:21.589497 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 13 08:08:21 crc kubenswrapper[4833]: I1013 08:08:21.595428 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 08:08:21 crc kubenswrapper[4833]: I1013 08:08:21.599212 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 08:08:21 crc kubenswrapper[4833]: I1013 08:08:21.621808 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 13 08:08:22 crc kubenswrapper[4833]: I1013 08:08:22.295822 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 13 08:08:22 crc kubenswrapper[4833]: I1013 08:08:22.353892 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 13 08:08:22 crc kubenswrapper[4833]: I1013 08:08:22.677702 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 08:08:22 crc kubenswrapper[4833]: I1013 08:08:22.677714 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.600591 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.601162 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.602649 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 
08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.602677 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.613209 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.617442 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.814339 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cc6dfdd47-bv4vt"] Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.816289 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.826336 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cc6dfdd47-bv4vt"] Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.992946 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-config\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.993013 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fbn\" (UniqueName: \"kubernetes.io/projected/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-kube-api-access-p6fbn\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.993070 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-nb\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.993156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-dns-svc\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:31 crc kubenswrapper[4833]: I1013 08:08:31.993179 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-sb\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.095483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-config\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.095599 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6fbn\" (UniqueName: 
\"kubernetes.io/projected/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-kube-api-access-p6fbn\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.095664 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-nb\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.095740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-dns-svc\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.095767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-sb\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.096710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-config\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.097041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-sb\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.097068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-nb\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.097911 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-dns-svc\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.117616 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6fbn\" (UniqueName: \"kubernetes.io/projected/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-kube-api-access-p6fbn\") pod \"dnsmasq-dns-cc6dfdd47-bv4vt\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.154908 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:32 crc kubenswrapper[4833]: I1013 08:08:32.604304 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cc6dfdd47-bv4vt"] Oct 13 08:08:33 crc kubenswrapper[4833]: I1013 08:08:33.445377 4833 generic.go:334] "Generic (PLEG): container finished" podID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" containerID="493e87936fe944374ecd58c0e93ec067a671b5653f29be33abaae8c9b791ab65" exitCode=0 Oct 13 08:08:33 crc kubenswrapper[4833]: I1013 08:08:33.447928 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" event={"ID":"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5","Type":"ContainerDied","Data":"493e87936fe944374ecd58c0e93ec067a671b5653f29be33abaae8c9b791ab65"} Oct 13 08:08:33 crc kubenswrapper[4833]: I1013 08:08:33.448047 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" event={"ID":"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5","Type":"ContainerStarted","Data":"611fd4a0844b3c501efca3ba9137dc44ce1a139da723e34a03cc1d5c646b2283"} Oct 13 08:08:34 crc kubenswrapper[4833]: I1013 08:08:34.465969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" event={"ID":"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5","Type":"ContainerStarted","Data":"386fabf5f1532486db306a843dfea8ffd580af5383c8c1a25d14b6943d8ebfd9"} Oct 13 08:08:34 crc kubenswrapper[4833]: I1013 08:08:34.466315 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:34 crc kubenswrapper[4833]: I1013 08:08:34.506185 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" podStartSLOduration=3.506155923 podStartE2EDuration="3.506155923s" podCreationTimestamp="2025-10-13 08:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:08:34.492864475 +0000 UTC m=+6004.593287381" watchObservedRunningTime="2025-10-13 08:08:34.506155923 +0000 UTC m=+6004.606578849" Oct 13 08:08:34 crc kubenswrapper[4833]: I1013 08:08:34.735607 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:34 crc kubenswrapper[4833]: I1013 08:08:34.735938 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-log" containerID="cri-o://db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd" gracePeriod=30 Oct 13 08:08:34 crc kubenswrapper[4833]: I1013 08:08:34.736083 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-api" containerID="cri-o://548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b" gracePeriod=30 Oct 13 08:08:35 crc kubenswrapper[4833]: I1013 08:08:35.475355 4833 generic.go:334] "Generic (PLEG): container finished" podID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerID="db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd" exitCode=143 Oct 13 08:08:35 crc kubenswrapper[4833]: I1013 08:08:35.475423 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e4d3e81-c9e7-4424-b93e-67e6563537fe","Type":"ContainerDied","Data":"db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd"} 
Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.381989 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.508308 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-config-data\") pod \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.508403 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4d3e81-c9e7-4424-b93e-67e6563537fe-logs\") pod \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.508588 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdpth\" (UniqueName: \"kubernetes.io/projected/7e4d3e81-c9e7-4424-b93e-67e6563537fe-kube-api-access-wdpth\") pod \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.508658 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-combined-ca-bundle\") pod \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\" (UID: \"7e4d3e81-c9e7-4424-b93e-67e6563537fe\") " Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.509153 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e4d3e81-c9e7-4424-b93e-67e6563537fe-logs" (OuterVolumeSpecName: "logs") pod "7e4d3e81-c9e7-4424-b93e-67e6563537fe" (UID: "7e4d3e81-c9e7-4424-b93e-67e6563537fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.517188 4833 generic.go:334] "Generic (PLEG): container finished" podID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerID="548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b" exitCode=0 Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.517264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e4d3e81-c9e7-4424-b93e-67e6563537fe","Type":"ContainerDied","Data":"548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b"} Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.517318 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e4d3e81-c9e7-4424-b93e-67e6563537fe","Type":"ContainerDied","Data":"1ae53c4d943573c641fb156799df8b18d077f2a07624b127c449a1621b6cad23"} Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.517347 4833 scope.go:117] "RemoveContainer" containerID="548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.517349 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.530641 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4d3e81-c9e7-4424-b93e-67e6563537fe-kube-api-access-wdpth" (OuterVolumeSpecName: "kube-api-access-wdpth") pod "7e4d3e81-c9e7-4424-b93e-67e6563537fe" (UID: "7e4d3e81-c9e7-4424-b93e-67e6563537fe"). 
InnerVolumeSpecName "kube-api-access-wdpth". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.549571 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-config-data" (OuterVolumeSpecName: "config-data") pod "7e4d3e81-c9e7-4424-b93e-67e6563537fe" (UID: "7e4d3e81-c9e7-4424-b93e-67e6563537fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.560226 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e4d3e81-c9e7-4424-b93e-67e6563537fe" (UID: "7e4d3e81-c9e7-4424-b93e-67e6563537fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.610764 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.610812 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4d3e81-c9e7-4424-b93e-67e6563537fe-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.610825 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4d3e81-c9e7-4424-b93e-67e6563537fe-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.610837 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdpth\" (UniqueName: \"kubernetes.io/projected/7e4d3e81-c9e7-4424-b93e-67e6563537fe-kube-api-access-wdpth\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.621164 4833 scope.go:117] "RemoveContainer" containerID="db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.639763 4833 scope.go:117] "RemoveContainer" containerID="548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b" Oct 13 08:08:38 crc kubenswrapper[4833]: E1013 08:08:38.640091 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b\": container with ID starting with 548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b not found: ID does not exist" containerID="548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.640138 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b"} err="failed to get container status \"548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b\": rpc error: code = NotFound desc = could not find container \"548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b\": container with ID starting with 548b30cec187d729b29fe7cc1bf23e9917d576558ed094f63c2f930509426d5b not found: ID does not exist" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.640169 4833 scope.go:117] "RemoveContainer" 
containerID="db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd" Oct 13 08:08:38 crc kubenswrapper[4833]: E1013 08:08:38.640531 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd\": container with ID starting with db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd not found: ID does not exist" containerID="db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.640575 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd"} err="failed to get container status \"db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd\": rpc error: code = NotFound desc = could not find container \"db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd\": container with ID starting with db121a7e92ec89177070921236a47121e75943146fc9b3da14afb85e9fdf42dd not found: ID does not exist" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.851337 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.881655 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.896715 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:38 crc kubenswrapper[4833]: E1013 08:08:38.897360 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-api" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.897390 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-api" Oct 13 08:08:38 crc kubenswrapper[4833]: E1013 08:08:38.897437 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-log" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.897452 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-log" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.897824 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-api" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.897870 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" containerName="nova-api-log" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.899688 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.902067 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.902276 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.904178 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 13 08:08:38 crc kubenswrapper[4833]: I1013 08:08:38.906342 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.032446 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-config-data\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.032640 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-logs\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.032719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.032786 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.032861 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjzlp\" (UniqueName: \"kubernetes.io/projected/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-kube-api-access-gjzlp\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.032960 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.135122 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-config-data\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.135204 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-logs\") pod \"nova-api-0\" (UID: 
\"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.135226 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.135254 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.135279 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjzlp\" (UniqueName: \"kubernetes.io/projected/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-kube-api-access-gjzlp\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.135334 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.136033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-logs\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.140433 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-config-data\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.140958 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.144195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.144839 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.152147 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjzlp\" (UniqueName: \"kubernetes.io/projected/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-kube-api-access-gjzlp\") pod \"nova-api-0\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " 
pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.223987 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.483378 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 08:08:39 crc kubenswrapper[4833]: I1013 08:08:39.525581 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5","Type":"ContainerStarted","Data":"f0d38829228e619af6ee53f2ce0c0920d2cabb9f3e6e17f58d17d62fd45af8c7"} Oct 13 08:08:40 crc kubenswrapper[4833]: I1013 08:08:40.541144 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5","Type":"ContainerStarted","Data":"8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28"} Oct 13 08:08:40 crc kubenswrapper[4833]: I1013 08:08:40.541737 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5","Type":"ContainerStarted","Data":"e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53"} Oct 13 08:08:40 crc kubenswrapper[4833]: I1013 08:08:40.616167 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.616139033 podStartE2EDuration="2.616139033s" podCreationTimestamp="2025-10-13 08:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:08:40.57595804 +0000 UTC m=+6010.676380986" watchObservedRunningTime="2025-10-13 08:08:40.616139033 +0000 UTC m=+6010.716561989" Oct 13 08:08:40 crc kubenswrapper[4833]: I1013 08:08:40.647327 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4d3e81-c9e7-4424-b93e-67e6563537fe" path="/var/lib/kubelet/pods/7e4d3e81-c9e7-4424-b93e-67e6563537fe/volumes" Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.157943 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.245376 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759f89cf5-f8l8v"] Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.245716 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" podUID="526665e6-74a4-4ca4-a786-1ba03f0381e7" containerName="dnsmasq-dns" containerID="cri-o://89bff2068716fb0f8da1b85376cd84820b0a6df2416a945920897bc97b68a777" gracePeriod=10 Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.576605 4833 generic.go:334] "Generic (PLEG): container finished" podID="526665e6-74a4-4ca4-a786-1ba03f0381e7" containerID="89bff2068716fb0f8da1b85376cd84820b0a6df2416a945920897bc97b68a777" exitCode=0 Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.576992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" event={"ID":"526665e6-74a4-4ca4-a786-1ba03f0381e7","Type":"ContainerDied","Data":"89bff2068716fb0f8da1b85376cd84820b0a6df2416a945920897bc97b68a777"} Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.779128 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.913020 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nm9n\" (UniqueName: \"kubernetes.io/projected/526665e6-74a4-4ca4-a786-1ba03f0381e7-kube-api-access-6nm9n\") pod \"526665e6-74a4-4ca4-a786-1ba03f0381e7\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.913059 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-nb\") pod \"526665e6-74a4-4ca4-a786-1ba03f0381e7\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.913120 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-sb\") pod \"526665e6-74a4-4ca4-a786-1ba03f0381e7\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.913280 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-config\") pod \"526665e6-74a4-4ca4-a786-1ba03f0381e7\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.913299 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-dns-svc\") pod \"526665e6-74a4-4ca4-a786-1ba03f0381e7\" (UID: \"526665e6-74a4-4ca4-a786-1ba03f0381e7\") " Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.919236 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526665e6-74a4-4ca4-a786-1ba03f0381e7-kube-api-access-6nm9n" (OuterVolumeSpecName: "kube-api-access-6nm9n") pod "526665e6-74a4-4ca4-a786-1ba03f0381e7" (UID: "526665e6-74a4-4ca4-a786-1ba03f0381e7"). InnerVolumeSpecName "kube-api-access-6nm9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.965902 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "526665e6-74a4-4ca4-a786-1ba03f0381e7" (UID: "526665e6-74a4-4ca4-a786-1ba03f0381e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.981938 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "526665e6-74a4-4ca4-a786-1ba03f0381e7" (UID: "526665e6-74a4-4ca4-a786-1ba03f0381e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.983870 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "526665e6-74a4-4ca4-a786-1ba03f0381e7" (UID: "526665e6-74a4-4ca4-a786-1ba03f0381e7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:08:42 crc kubenswrapper[4833]: I1013 08:08:42.988388 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-config" (OuterVolumeSpecName: "config") pod "526665e6-74a4-4ca4-a786-1ba03f0381e7" (UID: "526665e6-74a4-4ca4-a786-1ba03f0381e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.015496 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nm9n\" (UniqueName: \"kubernetes.io/projected/526665e6-74a4-4ca4-a786-1ba03f0381e7-kube-api-access-6nm9n\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.015561 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.015574 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.015587 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.015598 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/526665e6-74a4-4ca4-a786-1ba03f0381e7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.589644 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" event={"ID":"526665e6-74a4-4ca4-a786-1ba03f0381e7","Type":"ContainerDied","Data":"d2e60e5ad4c59b4ce79192fe75948038058e4c9c5b389b56c2400e5cdc6cf8b4"} Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.589875 4833 scope.go:117] "RemoveContainer" containerID="89bff2068716fb0f8da1b85376cd84820b0a6df2416a945920897bc97b68a777" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.589706 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759f89cf5-f8l8v" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.628264 4833 scope.go:117] "RemoveContainer" containerID="3a12950486a442b8068f2be39e8faf0ce419cd56e2549098993d33d9fdd6600f" Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.631375 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759f89cf5-f8l8v"] Oct 13 08:08:43 crc kubenswrapper[4833]: I1013 08:08:43.639080 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-759f89cf5-f8l8v"] Oct 13 08:08:44 crc kubenswrapper[4833]: I1013 08:08:44.648064 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526665e6-74a4-4ca4-a786-1ba03f0381e7" path="/var/lib/kubelet/pods/526665e6-74a4-4ca4-a786-1ba03f0381e7/volumes" Oct 13 08:08:48 crc kubenswrapper[4833]: I1013 08:08:48.086504 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fpjgk"] Oct 13 08:08:48 crc kubenswrapper[4833]: I1013 08:08:48.108873 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fpjgk"] Oct 13 08:08:48 crc kubenswrapper[4833]: I1013 08:08:48.646296 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a09cec-505e-46b9-9be8-163a017dd1e9" path="/var/lib/kubelet/pods/64a09cec-505e-46b9-9be8-163a017dd1e9/volumes" Oct 13 08:08:49 crc kubenswrapper[4833]: I1013 08:08:49.224383 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 08:08:49 crc kubenswrapper[4833]: I1013 08:08:49.224467 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 08:08:50 crc kubenswrapper[4833]: I1013 08:08:50.245839 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.100:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 08:08:50 crc kubenswrapper[4833]: I1013 08:08:50.245892 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.100:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 08:08:51 crc kubenswrapper[4833]: I1013 08:08:51.980799 4833 scope.go:117] "RemoveContainer" containerID="385a5ecc93b9e35ecae77545bc42aa1c021f860e6d6720584a1bca79e82bca53" Oct 13 08:08:52 crc kubenswrapper[4833]: I1013 08:08:52.017746 4833 scope.go:117] "RemoveContainer" containerID="b6b8c79d3f056b14774dc9e5d8468f9192da270491d51102fe1ef1f5c99a34ed" Oct 13 08:08:52 crc kubenswrapper[4833]: I1013 08:08:52.059481 4833 scope.go:117] "RemoveContainer" containerID="ab834059bc8d962374421434c7f5bd3761319215c22db327e9c67458d33aa56b" Oct 13 08:08:58 crc kubenswrapper[4833]: I1013 08:08:58.025202 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6458-account-create-zlnxq"] Oct 13 08:08:58 crc kubenswrapper[4833]: I1013 08:08:58.032650 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6458-account-create-zlnxq"] Oct 13 08:08:58 crc kubenswrapper[4833]: I1013 08:08:58.641603 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f300349-5dfc-4c90-b952-e89f89a74f9d" 
path="/var/lib/kubelet/pods/4f300349-5dfc-4c90-b952-e89f89a74f9d/volumes" Oct 13 08:08:59 crc kubenswrapper[4833]: I1013 08:08:59.233890 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 08:08:59 crc kubenswrapper[4833]: I1013 08:08:59.234679 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 08:08:59 crc kubenswrapper[4833]: I1013 08:08:59.236791 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 08:08:59 crc kubenswrapper[4833]: I1013 08:08:59.241794 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 08:08:59 crc kubenswrapper[4833]: I1013 08:08:59.784403 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 08:08:59 crc kubenswrapper[4833]: I1013 08:08:59.796836 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 08:09:05 crc kubenswrapper[4833]: I1013 08:09:05.034816 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-96x5z"] Oct 13 08:09:05 crc kubenswrapper[4833]: I1013 08:09:05.041626 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-96x5z"] Oct 13 08:09:06 crc kubenswrapper[4833]: I1013 08:09:06.643917 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b179700-e622-4497-8d98-096ddef6c4bf" path="/var/lib/kubelet/pods/9b179700-e622-4497-8d98-096ddef6c4bf/volumes" Oct 13 08:09:18 crc kubenswrapper[4833]: I1013 08:09:18.076281 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8mgv5"] Oct 13 08:09:18 crc kubenswrapper[4833]: I1013 08:09:18.086744 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8mgv5"] Oct 13 08:09:18 crc kubenswrapper[4833]: I1013 08:09:18.636414 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de418a90-689a-4e67-83db-8d62633f8657" path="/var/lib/kubelet/pods/de418a90-689a-4e67-83db-8d62633f8657/volumes" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.926077 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rd2wk"] Oct 13 08:09:23 crc kubenswrapper[4833]: E1013 08:09:23.927157 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526665e6-74a4-4ca4-a786-1ba03f0381e7" containerName="init" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.927177 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="526665e6-74a4-4ca4-a786-1ba03f0381e7" containerName="init" Oct 13 08:09:23 crc kubenswrapper[4833]: E1013 08:09:23.927202 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526665e6-74a4-4ca4-a786-1ba03f0381e7" containerName="dnsmasq-dns" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.927208 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="526665e6-74a4-4ca4-a786-1ba03f0381e7" containerName="dnsmasq-dns" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.927390 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="526665e6-74a4-4ca4-a786-1ba03f0381e7" containerName="dnsmasq-dns" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.928078 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.930908 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-p5p5p" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.931254 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.931418 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.935102 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rd2wk"] Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.976123 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-c74mk"] Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.978159 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:23 crc kubenswrapper[4833]: I1013 08:09:23.994762 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c74mk"] Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073050 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/61d0e633-8305-4d08-b83a-af05a6abbb96-ovn-controller-tls-certs\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073098 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-etc-ovs\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073137 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmp8\" (UniqueName: \"kubernetes.io/projected/6aaf4fde-4669-4220-97e2-f04d63727284-kube-api-access-msmp8\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073157 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-log-ovn\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073171 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk2c9\" (UniqueName: \"kubernetes.io/projected/61d0e633-8305-4d08-b83a-af05a6abbb96-kube-api-access-lk2c9\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073218 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-run-ovn\") pod \"ovn-controller-rd2wk\" (UID: 
\"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073265 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-lib\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073300 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-run\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61d0e633-8305-4d08-b83a-af05a6abbb96-scripts\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-run\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073378 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-log\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073412 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaf4fde-4669-4220-97e2-f04d63727284-scripts\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.073507 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d0e633-8305-4d08-b83a-af05a6abbb96-combined-ca-bundle\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175041 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaf4fde-4669-4220-97e2-f04d63727284-scripts\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175089 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d0e633-8305-4d08-b83a-af05a6abbb96-combined-ca-bundle\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc 
kubenswrapper[4833]: I1013 08:09:24.175156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/61d0e633-8305-4d08-b83a-af05a6abbb96-ovn-controller-tls-certs\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175189 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-etc-ovs\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msmp8\" (UniqueName: \"kubernetes.io/projected/6aaf4fde-4669-4220-97e2-f04d63727284-kube-api-access-msmp8\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175258 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-log-ovn\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175275 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk2c9\" (UniqueName: \"kubernetes.io/projected/61d0e633-8305-4d08-b83a-af05a6abbb96-kube-api-access-lk2c9\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175334 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-run-ovn\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175373 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-lib\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175402 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-run\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175420 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61d0e633-8305-4d08-b83a-af05a6abbb96-scripts\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175444 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-run\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175469 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-log\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175593 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-etc-ovs\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175627 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-log\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175692 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-lib\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175712 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-run-ovn\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175741 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-run\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.175795 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61d0e633-8305-4d08-b83a-af05a6abbb96-var-log-ovn\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.176028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aaf4fde-4669-4220-97e2-f04d63727284-var-run\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.177377 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaf4fde-4669-4220-97e2-f04d63727284-scripts\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.177797 4833 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61d0e633-8305-4d08-b83a-af05a6abbb96-scripts\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.183441 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d0e633-8305-4d08-b83a-af05a6abbb96-combined-ca-bundle\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.192227 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/61d0e633-8305-4d08-b83a-af05a6abbb96-ovn-controller-tls-certs\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.193822 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk2c9\" (UniqueName: \"kubernetes.io/projected/61d0e633-8305-4d08-b83a-af05a6abbb96-kube-api-access-lk2c9\") pod \"ovn-controller-rd2wk\" (UID: \"61d0e633-8305-4d08-b83a-af05a6abbb96\") " pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.196910 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msmp8\" (UniqueName: \"kubernetes.io/projected/6aaf4fde-4669-4220-97e2-f04d63727284-kube-api-access-msmp8\") pod \"ovn-controller-ovs-c74mk\" (UID: \"6aaf4fde-4669-4220-97e2-f04d63727284\") " pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.247664 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.304048 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:24 crc kubenswrapper[4833]: I1013 08:09:24.729996 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rd2wk"] Oct 13 08:09:24 crc kubenswrapper[4833]: W1013 08:09:24.740017 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61d0e633_8305_4d08_b83a_af05a6abbb96.slice/crio-c83e70f8e5fe0b29fd013508302468a2594c1206ca82da322fc959cab7d52780 WatchSource:0}: Error finding container c83e70f8e5fe0b29fd013508302468a2594c1206ca82da322fc959cab7d52780: Status 404 returned error can't find the container with id c83e70f8e5fe0b29fd013508302468a2594c1206ca82da322fc959cab7d52780 Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.111655 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rd2wk" event={"ID":"61d0e633-8305-4d08-b83a-af05a6abbb96","Type":"ContainerStarted","Data":"7e599b263200fc3281c55d1cdaa02b36c7a61c22893b81ada1bf154f69257cf3"} Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.112201 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.112221 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rd2wk" event={"ID":"61d0e633-8305-4d08-b83a-af05a6abbb96","Type":"ContainerStarted","Data":"c83e70f8e5fe0b29fd013508302468a2594c1206ca82da322fc959cab7d52780"} Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.138997 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rd2wk" podStartSLOduration=2.138979243 podStartE2EDuration="2.138979243s" podCreationTimestamp="2025-10-13 08:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:09:25.135933206 +0000 UTC m=+6055.236356122" watchObservedRunningTime="2025-10-13 08:09:25.138979243 +0000 UTC m=+6055.239402149" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.141685 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c74mk"] Oct 13 08:09:25 crc kubenswrapper[4833]: W1013 08:09:25.154600 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aaf4fde_4669_4220_97e2_f04d63727284.slice/crio-e92a81d2181e23b45d30fd058330a78069346f78841e20bb5c72d2642d5dc3d7 WatchSource:0}: Error finding container e92a81d2181e23b45d30fd058330a78069346f78841e20bb5c72d2642d5dc3d7: Status 404 returned error can't find the container with id e92a81d2181e23b45d30fd058330a78069346f78841e20bb5c72d2642d5dc3d7 Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.385780 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-8hbrm"] Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.389401 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-8hbrm" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.420793 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-8hbrm"] Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.461413 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hptlp"] Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.462724 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.465817 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.474687 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hptlp"] Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.517553 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjlpq\" (UniqueName: \"kubernetes.io/projected/26b9de1b-e509-4ca6-8eb1-d31cade8c30e-kube-api-access-pjlpq\") pod \"octavia-db-create-8hbrm\" (UID: \"26b9de1b-e509-4ca6-8eb1-d31cade8c30e\") " pod="openstack/octavia-db-create-8hbrm" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.619516 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7189a2c9-b33a-4701-a8af-43ad816b793e-ovn-rundir\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.619669 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjlpq\" (UniqueName: \"kubernetes.io/projected/26b9de1b-e509-4ca6-8eb1-d31cade8c30e-kube-api-access-pjlpq\") pod \"octavia-db-create-8hbrm\" (UID: \"26b9de1b-e509-4ca6-8eb1-d31cade8c30e\") " pod="openstack/octavia-db-create-8hbrm" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.619701 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7189a2c9-b33a-4701-a8af-43ad816b793e-combined-ca-bundle\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.619721 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7189a2c9-b33a-4701-a8af-43ad816b793e-ovs-rundir\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.619755 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7189a2c9-b33a-4701-a8af-43ad816b793e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.619778 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl4dx\" (UniqueName: 
\"kubernetes.io/projected/7189a2c9-b33a-4701-a8af-43ad816b793e-kube-api-access-cl4dx\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.619814 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7189a2c9-b33a-4701-a8af-43ad816b793e-config\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.637478 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjlpq\" (UniqueName: \"kubernetes.io/projected/26b9de1b-e509-4ca6-8eb1-d31cade8c30e-kube-api-access-pjlpq\") pod \"octavia-db-create-8hbrm\" (UID: \"26b9de1b-e509-4ca6-8eb1-d31cade8c30e\") " pod="openstack/octavia-db-create-8hbrm" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.720102 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-8hbrm" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.721799 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl4dx\" (UniqueName: \"kubernetes.io/projected/7189a2c9-b33a-4701-a8af-43ad816b793e-kube-api-access-cl4dx\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.721868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7189a2c9-b33a-4701-a8af-43ad816b793e-config\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.721928 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7189a2c9-b33a-4701-a8af-43ad816b793e-ovn-rundir\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.722002 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7189a2c9-b33a-4701-a8af-43ad816b793e-combined-ca-bundle\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.722023 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7189a2c9-b33a-4701-a8af-43ad816b793e-ovs-rundir\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.722056 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7189a2c9-b33a-4701-a8af-43ad816b793e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: 
I1013 08:09:25.722594 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7189a2c9-b33a-4701-a8af-43ad816b793e-ovn-rundir\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.723394 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7189a2c9-b33a-4701-a8af-43ad816b793e-config\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.726268 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7189a2c9-b33a-4701-a8af-43ad816b793e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.726402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7189a2c9-b33a-4701-a8af-43ad816b793e-ovs-rundir\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.729796 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7189a2c9-b33a-4701-a8af-43ad816b793e-combined-ca-bundle\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.751271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl4dx\" (UniqueName: \"kubernetes.io/projected/7189a2c9-b33a-4701-a8af-43ad816b793e-kube-api-access-cl4dx\") pod \"ovn-controller-metrics-hptlp\" (UID: \"7189a2c9-b33a-4701-a8af-43ad816b793e\") " pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:25 crc kubenswrapper[4833]: I1013 08:09:25.794958 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hptlp" Oct 13 08:09:26 crc kubenswrapper[4833]: I1013 08:09:26.126195 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c74mk" event={"ID":"6aaf4fde-4669-4220-97e2-f04d63727284","Type":"ContainerStarted","Data":"e35e5aa11b29504fd03a40eca1c594374d4b599d4d96f8463e2e291a9138b110"} Oct 13 08:09:26 crc kubenswrapper[4833]: I1013 08:09:26.126237 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c74mk" event={"ID":"6aaf4fde-4669-4220-97e2-f04d63727284","Type":"ContainerStarted","Data":"e92a81d2181e23b45d30fd058330a78069346f78841e20bb5c72d2642d5dc3d7"} Oct 13 08:09:26 crc kubenswrapper[4833]: I1013 08:09:26.335966 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-8hbrm"] Oct 13 08:09:26 crc kubenswrapper[4833]: I1013 08:09:26.362128 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hptlp"] Oct 13 08:09:27 crc kubenswrapper[4833]: I1013 08:09:27.142881 4833 generic.go:334] "Generic (PLEG): container finished" podID="6aaf4fde-4669-4220-97e2-f04d63727284" containerID="e35e5aa11b29504fd03a40eca1c594374d4b599d4d96f8463e2e291a9138b110" exitCode=0 Oct 13 08:09:27 crc kubenswrapper[4833]: I1013 08:09:27.142942 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c74mk" event={"ID":"6aaf4fde-4669-4220-97e2-f04d63727284","Type":"ContainerDied","Data":"e35e5aa11b29504fd03a40eca1c594374d4b599d4d96f8463e2e291a9138b110"} Oct 13 08:09:27 crc kubenswrapper[4833]: I1013 08:09:27.146371 4833 generic.go:334] "Generic (PLEG): container finished" podID="26b9de1b-e509-4ca6-8eb1-d31cade8c30e" containerID="d6acac150a4c0368f9593e9cdba383e6d0fcb161bb8aa90d2d5807df2ceafa78" exitCode=0 Oct 13 08:09:27 crc kubenswrapper[4833]: I1013 08:09:27.146610 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-8hbrm" event={"ID":"26b9de1b-e509-4ca6-8eb1-d31cade8c30e","Type":"ContainerDied","Data":"d6acac150a4c0368f9593e9cdba383e6d0fcb161bb8aa90d2d5807df2ceafa78"} Oct 13 08:09:27 crc kubenswrapper[4833]: I1013 08:09:27.146667 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-8hbrm" event={"ID":"26b9de1b-e509-4ca6-8eb1-d31cade8c30e","Type":"ContainerStarted","Data":"78851b69dd938a46fe7cd3136cd50e9538a3abd26cc17ce3ab407151ac79b839"} Oct 13 08:09:27 crc kubenswrapper[4833]: I1013 08:09:27.151177 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hptlp" event={"ID":"7189a2c9-b33a-4701-a8af-43ad816b793e","Type":"ContainerStarted","Data":"a4db22b46f3ada49982d5c15da13a71dcf509825a25bf8da021563f3a3b41cf1"} Oct 13 08:09:27 crc kubenswrapper[4833]: I1013 08:09:27.151218 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hptlp" event={"ID":"7189a2c9-b33a-4701-a8af-43ad816b793e","Type":"ContainerStarted","Data":"5ac0a422f8f7151dfc475c3662f267c5ef274b18dc25f24b10f097b28d35e344"} Oct 13 08:09:27 crc kubenswrapper[4833]: I1013 08:09:27.218014 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hptlp" podStartSLOduration=2.217989897 podStartE2EDuration="2.217989897s" podCreationTimestamp="2025-10-13 08:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:09:27.207446187 +0000 UTC 
m=+6057.307869103" watchObservedRunningTime="2025-10-13 08:09:27.217989897 +0000 UTC m=+6057.318412813" Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.163684 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c74mk" event={"ID":"6aaf4fde-4669-4220-97e2-f04d63727284","Type":"ContainerStarted","Data":"03616065de0bb28199bb931540bea3aba6d23aca873f2a3e2e765a4ce1ea7398"} Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.164940 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.165016 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c74mk" event={"ID":"6aaf4fde-4669-4220-97e2-f04d63727284","Type":"ContainerStarted","Data":"2ac6a462eab7ffb1250ed80366a0c0e109691e488c7c7cded9bb191fc8a9230a"} Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.165342 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.194680 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-c74mk" podStartSLOduration=5.194662377 podStartE2EDuration="5.194662377s" podCreationTimestamp="2025-10-13 08:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:09:28.187509264 +0000 UTC m=+6058.287932190" watchObservedRunningTime="2025-10-13 08:09:28.194662377 +0000 UTC m=+6058.295085293" Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.556152 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-8hbrm" Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.680999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjlpq\" (UniqueName: \"kubernetes.io/projected/26b9de1b-e509-4ca6-8eb1-d31cade8c30e-kube-api-access-pjlpq\") pod \"26b9de1b-e509-4ca6-8eb1-d31cade8c30e\" (UID: \"26b9de1b-e509-4ca6-8eb1-d31cade8c30e\") " Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.690981 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b9de1b-e509-4ca6-8eb1-d31cade8c30e-kube-api-access-pjlpq" (OuterVolumeSpecName: "kube-api-access-pjlpq") pod "26b9de1b-e509-4ca6-8eb1-d31cade8c30e" (UID: "26b9de1b-e509-4ca6-8eb1-d31cade8c30e"). InnerVolumeSpecName "kube-api-access-pjlpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:09:28 crc kubenswrapper[4833]: I1013 08:09:28.783454 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjlpq\" (UniqueName: \"kubernetes.io/projected/26b9de1b-e509-4ca6-8eb1-d31cade8c30e-kube-api-access-pjlpq\") on node \"crc\" DevicePath \"\"" Oct 13 08:09:29 crc kubenswrapper[4833]: I1013 08:09:29.177782 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-8hbrm" event={"ID":"26b9de1b-e509-4ca6-8eb1-d31cade8c30e","Type":"ContainerDied","Data":"78851b69dd938a46fe7cd3136cd50e9538a3abd26cc17ce3ab407151ac79b839"} Oct 13 08:09:29 crc kubenswrapper[4833]: I1013 08:09:29.177840 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78851b69dd938a46fe7cd3136cd50e9538a3abd26cc17ce3ab407151ac79b839" Oct 13 08:09:29 crc kubenswrapper[4833]: I1013 08:09:29.177873 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-8hbrm" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.360073 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-7442-account-create-qdw4q"] Oct 13 08:09:36 crc kubenswrapper[4833]: E1013 08:09:36.361135 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b9de1b-e509-4ca6-8eb1-d31cade8c30e" containerName="mariadb-database-create" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.361154 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b9de1b-e509-4ca6-8eb1-d31cade8c30e" containerName="mariadb-database-create" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.361376 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b9de1b-e509-4ca6-8eb1-d31cade8c30e" containerName="mariadb-database-create" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.362180 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-7442-account-create-qdw4q" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.369089 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.378954 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-7442-account-create-qdw4q"] Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.451966 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ck2h\" (UniqueName: \"kubernetes.io/projected/8d7fd704-430c-4f1f-9250-5e0619873cd0-kube-api-access-6ck2h\") pod \"octavia-7442-account-create-qdw4q\" (UID: \"8d7fd704-430c-4f1f-9250-5e0619873cd0\") " pod="openstack/octavia-7442-account-create-qdw4q" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.553679 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ck2h\" (UniqueName: \"kubernetes.io/projected/8d7fd704-430c-4f1f-9250-5e0619873cd0-kube-api-access-6ck2h\") pod \"octavia-7442-account-create-qdw4q\" (UID: \"8d7fd704-430c-4f1f-9250-5e0619873cd0\") " pod="openstack/octavia-7442-account-create-qdw4q" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.597074 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ck2h\" (UniqueName: \"kubernetes.io/projected/8d7fd704-430c-4f1f-9250-5e0619873cd0-kube-api-access-6ck2h\") pod \"octavia-7442-account-create-qdw4q\" (UID: \"8d7fd704-430c-4f1f-9250-5e0619873cd0\") " pod="openstack/octavia-7442-account-create-qdw4q" Oct 13 08:09:36 crc kubenswrapper[4833]: I1013 08:09:36.729898 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-7442-account-create-qdw4q" Oct 13 08:09:37 crc kubenswrapper[4833]: I1013 08:09:37.212953 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-7442-account-create-qdw4q"] Oct 13 08:09:37 crc kubenswrapper[4833]: I1013 08:09:37.272610 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7442-account-create-qdw4q" event={"ID":"8d7fd704-430c-4f1f-9250-5e0619873cd0","Type":"ContainerStarted","Data":"fc61fd31735ded2509251cb660c039c1d7051d798abd36cfd2d7f437409448bd"} Oct 13 08:09:38 crc kubenswrapper[4833]: I1013 08:09:38.286886 4833 generic.go:334] "Generic (PLEG): container finished" podID="8d7fd704-430c-4f1f-9250-5e0619873cd0" containerID="48f0df259e23bba0065574c5ff0ff317ad966590bdf222575956dc62322c9e62" exitCode=0 Oct 13 08:09:38 crc kubenswrapper[4833]: I1013 08:09:38.286983 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7442-account-create-qdw4q" event={"ID":"8d7fd704-430c-4f1f-9250-5e0619873cd0","Type":"ContainerDied","Data":"48f0df259e23bba0065574c5ff0ff317ad966590bdf222575956dc62322c9e62"} Oct 13 08:09:39 crc kubenswrapper[4833]: I1013 08:09:39.709443 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-7442-account-create-qdw4q" Oct 13 08:09:39 crc kubenswrapper[4833]: I1013 08:09:39.830912 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ck2h\" (UniqueName: \"kubernetes.io/projected/8d7fd704-430c-4f1f-9250-5e0619873cd0-kube-api-access-6ck2h\") pod \"8d7fd704-430c-4f1f-9250-5e0619873cd0\" (UID: \"8d7fd704-430c-4f1f-9250-5e0619873cd0\") " Oct 13 08:09:39 crc kubenswrapper[4833]: I1013 08:09:39.839271 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7fd704-430c-4f1f-9250-5e0619873cd0-kube-api-access-6ck2h" (OuterVolumeSpecName: "kube-api-access-6ck2h") pod "8d7fd704-430c-4f1f-9250-5e0619873cd0" (UID: "8d7fd704-430c-4f1f-9250-5e0619873cd0"). InnerVolumeSpecName "kube-api-access-6ck2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:09:39 crc kubenswrapper[4833]: I1013 08:09:39.932677 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ck2h\" (UniqueName: \"kubernetes.io/projected/8d7fd704-430c-4f1f-9250-5e0619873cd0-kube-api-access-6ck2h\") on node \"crc\" DevicePath \"\"" Oct 13 08:09:40 crc kubenswrapper[4833]: I1013 08:09:40.318141 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7442-account-create-qdw4q" event={"ID":"8d7fd704-430c-4f1f-9250-5e0619873cd0","Type":"ContainerDied","Data":"fc61fd31735ded2509251cb660c039c1d7051d798abd36cfd2d7f437409448bd"} Oct 13 08:09:40 crc kubenswrapper[4833]: I1013 08:09:40.318193 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc61fd31735ded2509251cb660c039c1d7051d798abd36cfd2d7f437409448bd" Oct 13 08:09:40 crc kubenswrapper[4833]: I1013 08:09:40.318229 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-7442-account-create-qdw4q" Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.414214 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-lzsts"] Oct 13 08:09:42 crc kubenswrapper[4833]: E1013 08:09:42.415329 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7fd704-430c-4f1f-9250-5e0619873cd0" containerName="mariadb-account-create" Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.415353 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7fd704-430c-4f1f-9250-5e0619873cd0" containerName="mariadb-account-create" Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.415762 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7fd704-430c-4f1f-9250-5e0619873cd0" containerName="mariadb-account-create" Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.416867 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-lzsts" Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.436794 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-lzsts"] Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.493594 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhqd\" (UniqueName: \"kubernetes.io/projected/a922c41a-416a-4a00-8360-37b21d30e628-kube-api-access-kwhqd\") pod \"octavia-persistence-db-create-lzsts\" (UID: \"a922c41a-416a-4a00-8360-37b21d30e628\") " pod="openstack/octavia-persistence-db-create-lzsts" Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.595680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwhqd\" (UniqueName: \"kubernetes.io/projected/a922c41a-416a-4a00-8360-37b21d30e628-kube-api-access-kwhqd\") pod \"octavia-persistence-db-create-lzsts\" (UID: \"a922c41a-416a-4a00-8360-37b21d30e628\") " pod="openstack/octavia-persistence-db-create-lzsts" Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.623260 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwhqd\" (UniqueName: \"kubernetes.io/projected/a922c41a-416a-4a00-8360-37b21d30e628-kube-api-access-kwhqd\") pod \"octavia-persistence-db-create-lzsts\" (UID: \"a922c41a-416a-4a00-8360-37b21d30e628\") " pod="openstack/octavia-persistence-db-create-lzsts" Oct 13 08:09:42 crc kubenswrapper[4833]: I1013 08:09:42.744188 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-lzsts" Oct 13 08:09:43 crc kubenswrapper[4833]: I1013 08:09:43.211148 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-lzsts"] Oct 13 08:09:43 crc kubenswrapper[4833]: W1013 08:09:43.218265 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda922c41a_416a_4a00_8360_37b21d30e628.slice/crio-70beb650fb74033608a91311962faafbb70f8cde5c32ba9481c091908a1b957d WatchSource:0}: Error finding container 70beb650fb74033608a91311962faafbb70f8cde5c32ba9481c091908a1b957d: Status 404 returned error can't find the container with id 70beb650fb74033608a91311962faafbb70f8cde5c32ba9481c091908a1b957d Oct 13 08:09:43 crc kubenswrapper[4833]: I1013 08:09:43.357599 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-lzsts" event={"ID":"a922c41a-416a-4a00-8360-37b21d30e628","Type":"ContainerStarted","Data":"70beb650fb74033608a91311962faafbb70f8cde5c32ba9481c091908a1b957d"} Oct 13 08:09:44 crc kubenswrapper[4833]: I1013 08:09:44.375664 4833 generic.go:334] "Generic (PLEG): container finished" podID="a922c41a-416a-4a00-8360-37b21d30e628" containerID="b72154f6d783e80e8bcc5b5a6e348cbcdb32a32bd379fe4860a9da1a65386805" exitCode=0 Oct 13 08:09:44 crc kubenswrapper[4833]: I1013 08:09:44.375757 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-lzsts" event={"ID":"a922c41a-416a-4a00-8360-37b21d30e628","Type":"ContainerDied","Data":"b72154f6d783e80e8bcc5b5a6e348cbcdb32a32bd379fe4860a9da1a65386805"} Oct 13 08:09:45 crc kubenswrapper[4833]: I1013 08:09:45.849474 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-lzsts" Oct 13 08:09:45 crc kubenswrapper[4833]: I1013 08:09:45.978679 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwhqd\" (UniqueName: \"kubernetes.io/projected/a922c41a-416a-4a00-8360-37b21d30e628-kube-api-access-kwhqd\") pod \"a922c41a-416a-4a00-8360-37b21d30e628\" (UID: \"a922c41a-416a-4a00-8360-37b21d30e628\") " Oct 13 08:09:45 crc kubenswrapper[4833]: I1013 08:09:45.986066 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a922c41a-416a-4a00-8360-37b21d30e628-kube-api-access-kwhqd" (OuterVolumeSpecName: "kube-api-access-kwhqd") pod "a922c41a-416a-4a00-8360-37b21d30e628" (UID: "a922c41a-416a-4a00-8360-37b21d30e628"). InnerVolumeSpecName "kube-api-access-kwhqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:09:46 crc kubenswrapper[4833]: I1013 08:09:46.081811 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwhqd\" (UniqueName: \"kubernetes.io/projected/a922c41a-416a-4a00-8360-37b21d30e628-kube-api-access-kwhqd\") on node \"crc\" DevicePath \"\"" Oct 13 08:09:46 crc kubenswrapper[4833]: I1013 08:09:46.404461 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-lzsts" event={"ID":"a922c41a-416a-4a00-8360-37b21d30e628","Type":"ContainerDied","Data":"70beb650fb74033608a91311962faafbb70f8cde5c32ba9481c091908a1b957d"} Oct 13 08:09:46 crc kubenswrapper[4833]: I1013 08:09:46.404507 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70beb650fb74033608a91311962faafbb70f8cde5c32ba9481c091908a1b957d" Oct 13 08:09:46 crc kubenswrapper[4833]: I1013 08:09:46.404599 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-lzsts" Oct 13 08:09:52 crc kubenswrapper[4833]: I1013 08:09:52.289286 4833 scope.go:117] "RemoveContainer" containerID="822fd76c870110279b91f2053106b87ca4acac746835343aea4b90dea4888bb2" Oct 13 08:09:52 crc kubenswrapper[4833]: I1013 08:09:52.337732 4833 scope.go:117] "RemoveContainer" containerID="2358d627f4b4f6ccc77818eeb4b48c9fd9fcdcc46baede8c999aaa3dbe5c960f" Oct 13 08:09:52 crc kubenswrapper[4833]: I1013 08:09:52.395975 4833 scope.go:117] "RemoveContainer" containerID="f6e366342a2b99abce28c251849269c2471823a00712ff9a6773729f030e3457" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.020954 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-b12f-account-create-72jt8"] Oct 13 08:09:53 crc kubenswrapper[4833]: E1013 08:09:53.021672 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a922c41a-416a-4a00-8360-37b21d30e628" containerName="mariadb-database-create" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.021703 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a922c41a-416a-4a00-8360-37b21d30e628" containerName="mariadb-database-create" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.022181 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a922c41a-416a-4a00-8360-37b21d30e628" containerName="mariadb-database-create" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.023452 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-b12f-account-create-72jt8" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.027023 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.034679 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-b12f-account-create-72jt8"] Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.157490 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwnt\" (UniqueName: \"kubernetes.io/projected/25461515-6805-4b45-a203-c778beb80fb4-kube-api-access-9dwnt\") pod \"octavia-b12f-account-create-72jt8\" (UID: \"25461515-6805-4b45-a203-c778beb80fb4\") " pod="openstack/octavia-b12f-account-create-72jt8" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.259912 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwnt\" (UniqueName: \"kubernetes.io/projected/25461515-6805-4b45-a203-c778beb80fb4-kube-api-access-9dwnt\") pod \"octavia-b12f-account-create-72jt8\" (UID: \"25461515-6805-4b45-a203-c778beb80fb4\") " pod="openstack/octavia-b12f-account-create-72jt8" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.287036 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwnt\" (UniqueName: \"kubernetes.io/projected/25461515-6805-4b45-a203-c778beb80fb4-kube-api-access-9dwnt\") pod \"octavia-b12f-account-create-72jt8\" (UID: \"25461515-6805-4b45-a203-c778beb80fb4\") " pod="openstack/octavia-b12f-account-create-72jt8" Oct 13 08:09:53 crc kubenswrapper[4833]: I1013 08:09:53.363438 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-b12f-account-create-72jt8" Oct 13 08:09:54 crc kubenswrapper[4833]: I1013 08:09:53.864141 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-b12f-account-create-72jt8"] Oct 13 08:09:54 crc kubenswrapper[4833]: I1013 08:09:54.511318 4833 generic.go:334] "Generic (PLEG): container finished" podID="25461515-6805-4b45-a203-c778beb80fb4" containerID="4324029f5827ab866c52db217e214e944d5f60b903fa03bc6fbdb7e396911f31" exitCode=0 Oct 13 08:09:54 crc kubenswrapper[4833]: I1013 08:09:54.511719 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b12f-account-create-72jt8" event={"ID":"25461515-6805-4b45-a203-c778beb80fb4","Type":"ContainerDied","Data":"4324029f5827ab866c52db217e214e944d5f60b903fa03bc6fbdb7e396911f31"} Oct 13 08:09:54 crc kubenswrapper[4833]: I1013 08:09:54.511854 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b12f-account-create-72jt8" event={"ID":"25461515-6805-4b45-a203-c778beb80fb4","Type":"ContainerStarted","Data":"2129e3438530dec15bee302667d4ac12208164b486108e258c728cea50fa8632"} Oct 13 08:09:55 crc kubenswrapper[4833]: I1013 08:09:55.880172 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-b12f-account-create-72jt8" Oct 13 08:09:56 crc kubenswrapper[4833]: I1013 08:09:56.023808 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dwnt\" (UniqueName: \"kubernetes.io/projected/25461515-6805-4b45-a203-c778beb80fb4-kube-api-access-9dwnt\") pod \"25461515-6805-4b45-a203-c778beb80fb4\" (UID: \"25461515-6805-4b45-a203-c778beb80fb4\") " Oct 13 08:09:56 crc kubenswrapper[4833]: I1013 08:09:56.032152 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25461515-6805-4b45-a203-c778beb80fb4-kube-api-access-9dwnt" (OuterVolumeSpecName: "kube-api-access-9dwnt") pod "25461515-6805-4b45-a203-c778beb80fb4" (UID: "25461515-6805-4b45-a203-c778beb80fb4"). InnerVolumeSpecName "kube-api-access-9dwnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:09:56 crc kubenswrapper[4833]: I1013 08:09:56.126838 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dwnt\" (UniqueName: \"kubernetes.io/projected/25461515-6805-4b45-a203-c778beb80fb4-kube-api-access-9dwnt\") on node \"crc\" DevicePath \"\"" Oct 13 08:09:56 crc kubenswrapper[4833]: I1013 08:09:56.538894 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b12f-account-create-72jt8" event={"ID":"25461515-6805-4b45-a203-c778beb80fb4","Type":"ContainerDied","Data":"2129e3438530dec15bee302667d4ac12208164b486108e258c728cea50fa8632"} Oct 13 08:09:56 crc kubenswrapper[4833]: I1013 08:09:56.538955 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2129e3438530dec15bee302667d4ac12208164b486108e258c728cea50fa8632" Oct 13 08:09:56 crc kubenswrapper[4833]: I1013 08:09:56.539009 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-b12f-account-create-72jt8" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.330212 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rd2wk" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.411823 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.437755 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c74mk" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.497627 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6b9f5bbfb5-2bc5l"] Oct 13 08:09:59 crc kubenswrapper[4833]: E1013 08:09:59.498101 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25461515-6805-4b45-a203-c778beb80fb4" containerName="mariadb-account-create" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.498118 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="25461515-6805-4b45-a203-c778beb80fb4" containerName="mariadb-account-create" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.498314 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="25461515-6805-4b45-a203-c778beb80fb4" containerName="mariadb-account-create" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.499623 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.505956 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.506205 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-t6h2m" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.506403 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.506489 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.531796 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6b9f5bbfb5-2bc5l"] Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.608157 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-ovndb-tls-certs\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.608203 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-combined-ca-bundle\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.608290 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-scripts\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.608355 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.608381 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data-merged\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.608397 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-octavia-run\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.617082 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rd2wk-config-6rxzq"] Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 
08:09:59.620079 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.622087 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.628382 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rd2wk-config-6rxzq"] Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.710268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-ovndb-tls-certs\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.710331 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-combined-ca-bundle\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.710433 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.710580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run-ovn\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.710609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tnv\" (UniqueName: \"kubernetes.io/projected/777c40eb-ad65-446c-acd9-8de603c473c0-kube-api-access-48tnv\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.711046 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-scripts\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.711121 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-scripts\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.711333 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-additional-scripts\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.711405 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-log-ovn\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.711598 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.711704 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data-merged\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.711779 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-octavia-run\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.712207 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data-merged\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.712308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-octavia-run\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.718039 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-combined-ca-bundle\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.718686 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-scripts\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.719916 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data\") 
pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.720486 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-ovndb-tls-certs\") pod \"octavia-api-6b9f5bbfb5-2bc5l\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.814621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815030 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run-ovn\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815050 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48tnv\" (UniqueName: \"kubernetes.io/projected/777c40eb-ad65-446c-acd9-8de603c473c0-kube-api-access-48tnv\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815080 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-scripts\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815096 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815109 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-additional-scripts\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815257 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-log-ovn\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815711 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-log-ovn\") pod \"ovn-controller-rd2wk-config-6rxzq\" 
(UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815794 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run-ovn\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.815865 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-additional-scripts\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.817574 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-scripts\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.828238 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.836115 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48tnv\" (UniqueName: \"kubernetes.io/projected/777c40eb-ad65-446c-acd9-8de603c473c0-kube-api-access-48tnv\") pod \"ovn-controller-rd2wk-config-6rxzq\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:09:59 crc kubenswrapper[4833]: I1013 08:09:59.937453 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:10:00 crc kubenswrapper[4833]: I1013 08:10:00.321414 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6b9f5bbfb5-2bc5l"] Oct 13 08:10:00 crc kubenswrapper[4833]: I1013 08:10:00.327847 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 08:10:00 crc kubenswrapper[4833]: I1013 08:10:00.489498 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rd2wk-config-6rxzq"] Oct 13 08:10:00 crc kubenswrapper[4833]: I1013 08:10:00.543085 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:10:00 crc kubenswrapper[4833]: I1013 08:10:00.543517 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:10:00 crc kubenswrapper[4833]: I1013 08:10:00.585106 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" event={"ID":"a86ac26b-2b19-4662-aa4b-2a94e78a18e9","Type":"ContainerStarted","Data":"25530ceb1f317e8d8de02539b6933b6c4a3b95049b6fb6d21063ec2bcd9b6ba6"} Oct 13 08:10:00 crc kubenswrapper[4833]: I1013 08:10:00.587975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rd2wk-config-6rxzq" event={"ID":"777c40eb-ad65-446c-acd9-8de603c473c0","Type":"ContainerStarted","Data":"45b197c90760feec106f05a89b9277b0b8030ccd7c0532f3ac692c613ea0e2ab"} Oct 13 08:10:01 crc kubenswrapper[4833]: I1013 08:10:01.631409 4833 generic.go:334] "Generic (PLEG): container finished" podID="777c40eb-ad65-446c-acd9-8de603c473c0" containerID="c059e3353b2e9fcd2728f1b27be6446414ea15278896d72775b4b206ce865b3a" exitCode=0 Oct 13 08:10:01 crc kubenswrapper[4833]: I1013 08:10:01.631722 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rd2wk-config-6rxzq" event={"ID":"777c40eb-ad65-446c-acd9-8de603c473c0","Type":"ContainerDied","Data":"c059e3353b2e9fcd2728f1b27be6446414ea15278896d72775b4b206ce865b3a"} Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.060991 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.084428 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48tnv\" (UniqueName: \"kubernetes.io/projected/777c40eb-ad65-446c-acd9-8de603c473c0-kube-api-access-48tnv\") pod \"777c40eb-ad65-446c-acd9-8de603c473c0\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.084510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run\") pod \"777c40eb-ad65-446c-acd9-8de603c473c0\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.084701 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run-ovn\") pod \"777c40eb-ad65-446c-acd9-8de603c473c0\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.084728 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-scripts\") pod \"777c40eb-ad65-446c-acd9-8de603c473c0\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.084810 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-log-ovn\") pod \"777c40eb-ad65-446c-acd9-8de603c473c0\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.084838 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-additional-scripts\") pod \"777c40eb-ad65-446c-acd9-8de603c473c0\" (UID: \"777c40eb-ad65-446c-acd9-8de603c473c0\") " Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.085869 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "777c40eb-ad65-446c-acd9-8de603c473c0" (UID: "777c40eb-ad65-446c-acd9-8de603c473c0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.086215 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "777c40eb-ad65-446c-acd9-8de603c473c0" (UID: "777c40eb-ad65-446c-acd9-8de603c473c0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.087164 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-scripts" (OuterVolumeSpecName: "scripts") pod "777c40eb-ad65-446c-acd9-8de603c473c0" (UID: "777c40eb-ad65-446c-acd9-8de603c473c0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.087211 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "777c40eb-ad65-446c-acd9-8de603c473c0" (UID: "777c40eb-ad65-446c-acd9-8de603c473c0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.087237 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run" (OuterVolumeSpecName: "var-run") pod "777c40eb-ad65-446c-acd9-8de603c473c0" (UID: "777c40eb-ad65-446c-acd9-8de603c473c0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.095506 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777c40eb-ad65-446c-acd9-8de603c473c0-kube-api-access-48tnv" (OuterVolumeSpecName: "kube-api-access-48tnv") pod "777c40eb-ad65-446c-acd9-8de603c473c0" (UID: "777c40eb-ad65-446c-acd9-8de603c473c0"). InnerVolumeSpecName "kube-api-access-48tnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.186890 4833 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.186937 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.186948 4833 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.186958 4833 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/777c40eb-ad65-446c-acd9-8de603c473c0-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.186969 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48tnv\" (UniqueName: \"kubernetes.io/projected/777c40eb-ad65-446c-acd9-8de603c473c0-kube-api-access-48tnv\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.186979 4833 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/777c40eb-ad65-446c-acd9-8de603c473c0-var-run\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.666863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rd2wk-config-6rxzq" event={"ID":"777c40eb-ad65-446c-acd9-8de603c473c0","Type":"ContainerDied","Data":"45b197c90760feec106f05a89b9277b0b8030ccd7c0532f3ac692c613ea0e2ab"} Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 08:10:03.667085 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45b197c90760feec106f05a89b9277b0b8030ccd7c0532f3ac692c613ea0e2ab" Oct 13 08:10:03 crc kubenswrapper[4833]: I1013 
08:10:03.666918 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rd2wk-config-6rxzq" Oct 13 08:10:04 crc kubenswrapper[4833]: I1013 08:10:04.147533 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rd2wk-config-6rxzq"] Oct 13 08:10:04 crc kubenswrapper[4833]: I1013 08:10:04.157381 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rd2wk-config-6rxzq"] Oct 13 08:10:04 crc kubenswrapper[4833]: I1013 08:10:04.657259 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777c40eb-ad65-446c-acd9-8de603c473c0" path="/var/lib/kubelet/pods/777c40eb-ad65-446c-acd9-8de603c473c0/volumes" Oct 13 08:10:11 crc kubenswrapper[4833]: I1013 08:10:11.752730 4833 generic.go:334] "Generic (PLEG): container finished" podID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerID="de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef" exitCode=0 Oct 13 08:10:11 crc kubenswrapper[4833]: I1013 08:10:11.752973 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" event={"ID":"a86ac26b-2b19-4662-aa4b-2a94e78a18e9","Type":"ContainerDied","Data":"de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef"} Oct 13 08:10:12 crc kubenswrapper[4833]: I1013 08:10:12.774570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" event={"ID":"a86ac26b-2b19-4662-aa4b-2a94e78a18e9","Type":"ContainerStarted","Data":"b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef"} Oct 13 08:10:12 crc kubenswrapper[4833]: I1013 08:10:12.774918 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" event={"ID":"a86ac26b-2b19-4662-aa4b-2a94e78a18e9","Type":"ContainerStarted","Data":"fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f"} Oct 13 08:10:12 crc kubenswrapper[4833]: I1013 08:10:12.774993 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:10:12 crc kubenswrapper[4833]: I1013 08:10:12.775033 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:10:12 crc kubenswrapper[4833]: I1013 08:10:12.808937 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" podStartSLOduration=3.426764439 podStartE2EDuration="13.80890878s" podCreationTimestamp="2025-10-13 08:09:59 +0000 UTC" firstStartedPulling="2025-10-13 08:10:00.327614443 +0000 UTC m=+6090.428037359" lastFinishedPulling="2025-10-13 08:10:10.709758784 +0000 UTC m=+6100.810181700" observedRunningTime="2025-10-13 08:10:12.806311727 +0000 UTC m=+6102.906734653" watchObservedRunningTime="2025-10-13 08:10:12.80890878 +0000 UTC m=+6102.909331706" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.242100 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-rmj9t"] Oct 13 08:10:19 crc kubenswrapper[4833]: E1013 08:10:19.243408 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777c40eb-ad65-446c-acd9-8de603c473c0" containerName="ovn-config" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.243430 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="777c40eb-ad65-446c-acd9-8de603c473c0" containerName="ovn-config" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.243809 4833 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="777c40eb-ad65-446c-acd9-8de603c473c0" containerName="ovn-config" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.245856 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.255193 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-rmj9t"] Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.283210 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.283664 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.283896 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.436987 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c8629c8-3ccd-4d03-be53-6923783cf739-scripts\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.437097 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4c8629c8-3ccd-4d03-be53-6923783cf739-config-data-merged\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.437298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4c8629c8-3ccd-4d03-be53-6923783cf739-hm-ports\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.437333 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c8629c8-3ccd-4d03-be53-6923783cf739-config-data\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.539360 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4c8629c8-3ccd-4d03-be53-6923783cf739-config-data-merged\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.539721 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4c8629c8-3ccd-4d03-be53-6923783cf739-hm-ports\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.539792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c8629c8-3ccd-4d03-be53-6923783cf739-config-data\") pod \"octavia-rsyslog-rmj9t\" (UID: 
\"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.539866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c8629c8-3ccd-4d03-be53-6923783cf739-scripts\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.542757 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4c8629c8-3ccd-4d03-be53-6923783cf739-hm-ports\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.543383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4c8629c8-3ccd-4d03-be53-6923783cf739-config-data-merged\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.551625 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c8629c8-3ccd-4d03-be53-6923783cf739-config-data\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.575994 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c8629c8-3ccd-4d03-be53-6923783cf739-scripts\") pod \"octavia-rsyslog-rmj9t\" (UID: \"4c8629c8-3ccd-4d03-be53-6923783cf739\") " pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.596841 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.976827 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-678599687f-tdfgk"] Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.978914 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.984271 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Oct 13 08:10:19 crc kubenswrapper[4833]: I1013 08:10:19.991224 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-tdfgk"] Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.152227 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2062231-fd5b-4e8a-97db-20d869a9bf89-httpd-config\") pod \"octavia-image-upload-678599687f-tdfgk\" (UID: \"a2062231-fd5b-4e8a-97db-20d869a9bf89\") " pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.152312 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2062231-fd5b-4e8a-97db-20d869a9bf89-amphora-image\") pod \"octavia-image-upload-678599687f-tdfgk\" (UID: \"a2062231-fd5b-4e8a-97db-20d869a9bf89\") " pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.174207 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-rmj9t"] Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.254451 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2062231-fd5b-4e8a-97db-20d869a9bf89-amphora-image\") pod \"octavia-image-upload-678599687f-tdfgk\" (UID: \"a2062231-fd5b-4e8a-97db-20d869a9bf89\") " pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.254662 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2062231-fd5b-4e8a-97db-20d869a9bf89-httpd-config\") pod \"octavia-image-upload-678599687f-tdfgk\" (UID: \"a2062231-fd5b-4e8a-97db-20d869a9bf89\") " pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.255119 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2062231-fd5b-4e8a-97db-20d869a9bf89-amphora-image\") pod \"octavia-image-upload-678599687f-tdfgk\" (UID: \"a2062231-fd5b-4e8a-97db-20d869a9bf89\") " pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.261802 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2062231-fd5b-4e8a-97db-20d869a9bf89-httpd-config\") pod \"octavia-image-upload-678599687f-tdfgk\" (UID: \"a2062231-fd5b-4e8a-97db-20d869a9bf89\") " pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.308158 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.825577 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-tdfgk"] Oct 13 08:10:20 crc kubenswrapper[4833]: W1013 08:10:20.827840 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2062231_fd5b_4e8a_97db_20d869a9bf89.slice/crio-095a12d20d4a75f3630d47dd6cda304a22093081f4c6fa5bc65c76160cb9c2e9 WatchSource:0}: Error finding container 095a12d20d4a75f3630d47dd6cda304a22093081f4c6fa5bc65c76160cb9c2e9: Status 404 returned error can't find the container with id 095a12d20d4a75f3630d47dd6cda304a22093081f4c6fa5bc65c76160cb9c2e9 Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.867850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-rmj9t" event={"ID":"4c8629c8-3ccd-4d03-be53-6923783cf739","Type":"ContainerStarted","Data":"d3739e9c02d225ebd880dc8ab580f1ecd56f1518e3359ec7df2122e8e03b32d1"} Oct 13 08:10:20 crc kubenswrapper[4833]: I1013 08:10:20.868873 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-tdfgk" event={"ID":"a2062231-fd5b-4e8a-97db-20d869a9bf89","Type":"ContainerStarted","Data":"095a12d20d4a75f3630d47dd6cda304a22093081f4c6fa5bc65c76160cb9c2e9"} Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.469754 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-5b9f5bccc5-4lqcj"] Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.472374 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.475055 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.475092 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.483591 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5b9f5bccc5-4lqcj"] Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.651619 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-public-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.652020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9e9d268a-8671-4459-96b3-abb75af5726a-config-data-merged\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.652054 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-scripts\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.652097 
4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-internal-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.652162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-ovndb-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.652207 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/9e9d268a-8671-4459-96b3-abb75af5726a-octavia-run\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.652243 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-combined-ca-bundle\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.652295 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-config-data\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.754421 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9e9d268a-8671-4459-96b3-abb75af5726a-config-data-merged\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.754479 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-scripts\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.754564 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-internal-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.754631 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-ovndb-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc 
kubenswrapper[4833]: I1013 08:10:21.755074 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/9e9d268a-8671-4459-96b3-abb75af5726a-octavia-run\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.755111 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-combined-ca-bundle\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.756004 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-config-data\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.756123 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-public-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.755578 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/9e9d268a-8671-4459-96b3-abb75af5726a-octavia-run\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.755110 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9e9d268a-8671-4459-96b3-abb75af5726a-config-data-merged\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.761646 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-scripts\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.761642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-combined-ca-bundle\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.762342 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-public-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.762408 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-config-data\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.762906 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-internal-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.781120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9d268a-8671-4459-96b3-abb75af5726a-ovndb-tls-certs\") pod \"octavia-api-5b9f5bccc5-4lqcj\" (UID: \"9e9d268a-8671-4459-96b3-abb75af5726a\") " pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:21 crc kubenswrapper[4833]: I1013 08:10:21.848134 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:22 crc kubenswrapper[4833]: W1013 08:10:22.380817 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e9d268a_8671_4459_96b3_abb75af5726a.slice/crio-5afc4f1adf9d67d455a8a892971a327e33da1fc847ed21fbeb13705136228324 WatchSource:0}: Error finding container 5afc4f1adf9d67d455a8a892971a327e33da1fc847ed21fbeb13705136228324: Status 404 returned error can't find the container with id 5afc4f1adf9d67d455a8a892971a327e33da1fc847ed21fbeb13705136228324 Oct 13 08:10:22 crc kubenswrapper[4833]: I1013 08:10:22.383638 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5b9f5bccc5-4lqcj"] Oct 13 08:10:22 crc kubenswrapper[4833]: I1013 08:10:22.892977 4833 generic.go:334] "Generic (PLEG): container finished" podID="9e9d268a-8671-4459-96b3-abb75af5726a" containerID="39708e1e8bf5a913d41da5ab5bc99b6bfdbb574f8c040baa01360e8478fae109" exitCode=0 Oct 13 08:10:22 crc kubenswrapper[4833]: I1013 08:10:22.893150 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" event={"ID":"9e9d268a-8671-4459-96b3-abb75af5726a","Type":"ContainerDied","Data":"39708e1e8bf5a913d41da5ab5bc99b6bfdbb574f8c040baa01360e8478fae109"} Oct 13 08:10:22 crc kubenswrapper[4833]: I1013 08:10:22.893424 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" event={"ID":"9e9d268a-8671-4459-96b3-abb75af5726a","Type":"ContainerStarted","Data":"5afc4f1adf9d67d455a8a892971a327e33da1fc847ed21fbeb13705136228324"} Oct 13 08:10:22 crc kubenswrapper[4833]: I1013 08:10:22.898126 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-rmj9t" event={"ID":"4c8629c8-3ccd-4d03-be53-6923783cf739","Type":"ContainerStarted","Data":"c497cc642fbe9861c913d30fa13d2e246a5ea6aa8dbcf8e253e00f8bf0ba0a1d"} Oct 13 08:10:23 crc kubenswrapper[4833]: I1013 08:10:23.914073 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" event={"ID":"9e9d268a-8671-4459-96b3-abb75af5726a","Type":"ContainerStarted","Data":"e781c01370001347bf5cb5438360a6c06a02efb44636b4a7067a6856b7630301"} Oct 13 08:10:23 crc kubenswrapper[4833]: I1013 08:10:23.943768 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/octavia-api-5b9f5bccc5-4lqcj" podStartSLOduration=2.943655853 podStartE2EDuration="2.943655853s" podCreationTimestamp="2025-10-13 08:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:10:23.939105384 +0000 UTC m=+6114.039528300" watchObservedRunningTime="2025-10-13 08:10:23.943655853 +0000 UTC m=+6114.044078769" Oct 13 08:10:24 crc kubenswrapper[4833]: I1013 08:10:24.926204 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" event={"ID":"9e9d268a-8671-4459-96b3-abb75af5726a","Type":"ContainerStarted","Data":"81515396c111b1711acecdd4379fca57ab0fca62edd16cacdf150467185b5052"} Oct 13 08:10:24 crc kubenswrapper[4833]: I1013 08:10:24.926681 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:24 crc kubenswrapper[4833]: I1013 08:10:24.926738 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:24 crc kubenswrapper[4833]: I1013 08:10:24.930390 4833 generic.go:334] "Generic (PLEG): container finished" podID="4c8629c8-3ccd-4d03-be53-6923783cf739" containerID="c497cc642fbe9861c913d30fa13d2e246a5ea6aa8dbcf8e253e00f8bf0ba0a1d" exitCode=0 Oct 13 08:10:24 crc kubenswrapper[4833]: I1013 08:10:24.930430 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-rmj9t" event={"ID":"4c8629c8-3ccd-4d03-be53-6923783cf739","Type":"ContainerDied","Data":"c497cc642fbe9861c913d30fa13d2e246a5ea6aa8dbcf8e253e00f8bf0ba0a1d"} Oct 13 08:10:27 crc kubenswrapper[4833]: I1013 08:10:27.968670 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-rmj9t" event={"ID":"4c8629c8-3ccd-4d03-be53-6923783cf739","Type":"ContainerStarted","Data":"e17b5678cb18e78fdb9111b482ccad97769ee8fe9e2a6973aff3cc185d74b816"} Oct 13 08:10:27 crc kubenswrapper[4833]: I1013 08:10:27.969314 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.250199 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-rmj9t" podStartSLOduration=4.158414064 podStartE2EDuration="10.250181905s" podCreationTimestamp="2025-10-13 08:10:19 +0000 UTC" firstStartedPulling="2025-10-13 08:10:20.181312143 +0000 UTC m=+6110.281735059" lastFinishedPulling="2025-10-13 08:10:26.273079984 +0000 UTC m=+6116.373502900" observedRunningTime="2025-10-13 08:10:27.989342297 +0000 UTC m=+6118.089765213" watchObservedRunningTime="2025-10-13 08:10:29.250181905 +0000 UTC m=+6119.350604811" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.250386 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-5rmws"] Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.252092 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.254758 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.281121 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-5rmws"] Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.331600 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-combined-ca-bundle\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.331716 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-scripts\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.332062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.332124 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data-merged\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.433231 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-scripts\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.433403 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.433432 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data-merged\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.433496 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-combined-ca-bundle\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.434188 4833 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data-merged\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.454065 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-scripts\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.454354 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-combined-ca-bundle\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.454457 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data\") pod \"octavia-db-sync-5rmws\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:29 crc kubenswrapper[4833]: I1013 08:10:29.581405 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:30 crc kubenswrapper[4833]: I1013 08:10:30.542956 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:10:30 crc kubenswrapper[4833]: I1013 08:10:30.543333 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:10:31 crc kubenswrapper[4833]: I1013 08:10:31.897876 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-5rmws"] Oct 13 08:10:31 crc kubenswrapper[4833]: W1013 08:10:31.907955 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948a7fdd_f311_454f_b73f_3c62b09a90eb.slice/crio-53804c7abcdc31499d6bd2548e92fddbe0884ffcab58950dd1b4ddc736abd943 WatchSource:0}: Error finding container 53804c7abcdc31499d6bd2548e92fddbe0884ffcab58950dd1b4ddc736abd943: Status 404 returned error can't find the container with id 53804c7abcdc31499d6bd2548e92fddbe0884ffcab58950dd1b4ddc736abd943 Oct 13 08:10:32 crc kubenswrapper[4833]: I1013 08:10:32.017170 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-tdfgk" event={"ID":"a2062231-fd5b-4e8a-97db-20d869a9bf89","Type":"ContainerStarted","Data":"7d126dfe6920809da89fc8552c8366d14317d4be0c47e5a101f693934e59f160"} Oct 13 08:10:32 crc kubenswrapper[4833]: I1013 08:10:32.021521 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-5rmws" 
event={"ID":"948a7fdd-f311-454f-b73f-3c62b09a90eb","Type":"ContainerStarted","Data":"53804c7abcdc31499d6bd2548e92fddbe0884ffcab58950dd1b4ddc736abd943"} Oct 13 08:10:33 crc kubenswrapper[4833]: I1013 08:10:33.040792 4833 generic.go:334] "Generic (PLEG): container finished" podID="a2062231-fd5b-4e8a-97db-20d869a9bf89" containerID="7d126dfe6920809da89fc8552c8366d14317d4be0c47e5a101f693934e59f160" exitCode=0 Oct 13 08:10:33 crc kubenswrapper[4833]: I1013 08:10:33.040907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-tdfgk" event={"ID":"a2062231-fd5b-4e8a-97db-20d869a9bf89","Type":"ContainerDied","Data":"7d126dfe6920809da89fc8552c8366d14317d4be0c47e5a101f693934e59f160"} Oct 13 08:10:33 crc kubenswrapper[4833]: I1013 08:10:33.044241 4833 generic.go:334] "Generic (PLEG): container finished" podID="948a7fdd-f311-454f-b73f-3c62b09a90eb" containerID="94d4494ae52fcd809196f14ecf43faa3136a692a4f73c8aa4d607171fc40daf9" exitCode=0 Oct 13 08:10:33 crc kubenswrapper[4833]: I1013 08:10:33.044310 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-5rmws" event={"ID":"948a7fdd-f311-454f-b73f-3c62b09a90eb","Type":"ContainerDied","Data":"94d4494ae52fcd809196f14ecf43faa3136a692a4f73c8aa4d607171fc40daf9"} Oct 13 08:10:33 crc kubenswrapper[4833]: I1013 08:10:33.784868 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:10:33 crc kubenswrapper[4833]: I1013 08:10:33.954821 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:10:34 crc kubenswrapper[4833]: I1013 08:10:34.065116 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-tdfgk" event={"ID":"a2062231-fd5b-4e8a-97db-20d869a9bf89","Type":"ContainerStarted","Data":"9565c77871600315b450fc3354bb7e61dd1f9c7921dfd59d34d37f70ae5dfe84"} Oct 13 08:10:34 crc kubenswrapper[4833]: I1013 08:10:34.074521 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-5rmws" event={"ID":"948a7fdd-f311-454f-b73f-3c62b09a90eb","Type":"ContainerStarted","Data":"96d3d80e29049da8248ff57519fda8b5efc17505a8288bf019e45908f18ada3c"} Oct 13 08:10:34 crc kubenswrapper[4833]: I1013 08:10:34.093597 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-678599687f-tdfgk" podStartSLOduration=4.364171342 podStartE2EDuration="15.093574273s" podCreationTimestamp="2025-10-13 08:10:19 +0000 UTC" firstStartedPulling="2025-10-13 08:10:20.829786665 +0000 UTC m=+6110.930209621" lastFinishedPulling="2025-10-13 08:10:31.559189626 +0000 UTC m=+6121.659612552" observedRunningTime="2025-10-13 08:10:34.082075446 +0000 UTC m=+6124.182498352" watchObservedRunningTime="2025-10-13 08:10:34.093574273 +0000 UTC m=+6124.193997189" Oct 13 08:10:34 crc kubenswrapper[4833]: I1013 08:10:34.111274 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-5rmws" podStartSLOduration=5.111255656 podStartE2EDuration="5.111255656s" podCreationTimestamp="2025-10-13 08:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:10:34.108712273 +0000 UTC m=+6124.209135219" watchObservedRunningTime="2025-10-13 08:10:34.111255656 +0000 UTC m=+6124.211678572" Oct 13 08:10:34 crc kubenswrapper[4833]: I1013 08:10:34.643628 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-rmj9t" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.502135 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-js4km"] Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.506830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.516013 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-js4km"] Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.704266 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-catalog-content\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.704325 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-utilities\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.704421 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbqn\" (UniqueName: \"kubernetes.io/projected/2f802ed1-aad8-4ed9-8768-adbfafee22cd-kube-api-access-lwbqn\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.806353 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwbqn\" (UniqueName: \"kubernetes.io/projected/2f802ed1-aad8-4ed9-8768-adbfafee22cd-kube-api-access-lwbqn\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.806492 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-catalog-content\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.806591 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-utilities\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.807242 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-catalog-content\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.807235 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-utilities\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.826142 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwbqn\" (UniqueName: \"kubernetes.io/projected/2f802ed1-aad8-4ed9-8768-adbfafee22cd-kube-api-access-lwbqn\") pod \"community-operators-js4km\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:35 crc kubenswrapper[4833]: I1013 08:10:35.829827 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:36 crc kubenswrapper[4833]: I1013 08:10:36.310911 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-js4km"] Oct 13 08:10:37 crc kubenswrapper[4833]: I1013 08:10:37.118747 4833 generic.go:334] "Generic (PLEG): container finished" podID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerID="46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753" exitCode=0 Oct 13 08:10:37 crc kubenswrapper[4833]: I1013 08:10:37.118833 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-js4km" event={"ID":"2f802ed1-aad8-4ed9-8768-adbfafee22cd","Type":"ContainerDied","Data":"46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753"} Oct 13 08:10:37 crc kubenswrapper[4833]: I1013 08:10:37.119088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-js4km" event={"ID":"2f802ed1-aad8-4ed9-8768-adbfafee22cd","Type":"ContainerStarted","Data":"044dde3d455927d023a952f3c478935c020d1d6f23e6f8aea0be171a2f692d17"} Oct 13 08:10:38 crc kubenswrapper[4833]: I1013 08:10:38.132451 4833 generic.go:334] "Generic (PLEG): container finished" podID="948a7fdd-f311-454f-b73f-3c62b09a90eb" containerID="96d3d80e29049da8248ff57519fda8b5efc17505a8288bf019e45908f18ada3c" exitCode=0 Oct 13 08:10:38 crc kubenswrapper[4833]: I1013 08:10:38.132518 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-5rmws" event={"ID":"948a7fdd-f311-454f-b73f-3c62b09a90eb","Type":"ContainerDied","Data":"96d3d80e29049da8248ff57519fda8b5efc17505a8288bf019e45908f18ada3c"} Oct 13 08:10:38 crc kubenswrapper[4833]: I1013 08:10:38.138437 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-js4km" event={"ID":"2f802ed1-aad8-4ed9-8768-adbfafee22cd","Type":"ContainerStarted","Data":"9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498"} Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.153136 4833 generic.go:334] "Generic (PLEG): container finished" podID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerID="9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498" exitCode=0 Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.155605 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-js4km" event={"ID":"2f802ed1-aad8-4ed9-8768-adbfafee22cd","Type":"ContainerDied","Data":"9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498"} Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.588934 4833 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.706112 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-combined-ca-bundle\") pod \"948a7fdd-f311-454f-b73f-3c62b09a90eb\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.706744 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-scripts\") pod \"948a7fdd-f311-454f-b73f-3c62b09a90eb\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.706901 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data\") pod \"948a7fdd-f311-454f-b73f-3c62b09a90eb\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.707025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data-merged\") pod \"948a7fdd-f311-454f-b73f-3c62b09a90eb\" (UID: \"948a7fdd-f311-454f-b73f-3c62b09a90eb\") " Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.712369 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data" (OuterVolumeSpecName: "config-data") pod "948a7fdd-f311-454f-b73f-3c62b09a90eb" (UID: "948a7fdd-f311-454f-b73f-3c62b09a90eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.712414 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-scripts" (OuterVolumeSpecName: "scripts") pod "948a7fdd-f311-454f-b73f-3c62b09a90eb" (UID: "948a7fdd-f311-454f-b73f-3c62b09a90eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.740950 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948a7fdd-f311-454f-b73f-3c62b09a90eb" (UID: "948a7fdd-f311-454f-b73f-3c62b09a90eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.743174 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "948a7fdd-f311-454f-b73f-3c62b09a90eb" (UID: "948a7fdd-f311-454f-b73f-3c62b09a90eb"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.809920 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.810929 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/948a7fdd-f311-454f-b73f-3c62b09a90eb-config-data-merged\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.811024 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:39 crc kubenswrapper[4833]: I1013 08:10:39.811090 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a7fdd-f311-454f-b73f-3c62b09a90eb-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:40 crc kubenswrapper[4833]: I1013 08:10:40.167104 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-5rmws" event={"ID":"948a7fdd-f311-454f-b73f-3c62b09a90eb","Type":"ContainerDied","Data":"53804c7abcdc31499d6bd2548e92fddbe0884ffcab58950dd1b4ddc736abd943"} Oct 13 08:10:40 crc kubenswrapper[4833]: I1013 08:10:40.167151 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53804c7abcdc31499d6bd2548e92fddbe0884ffcab58950dd1b4ddc736abd943" Oct 13 08:10:40 crc kubenswrapper[4833]: I1013 08:10:40.167225 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-5rmws" Oct 13 08:10:40 crc kubenswrapper[4833]: I1013 08:10:40.184924 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-js4km" event={"ID":"2f802ed1-aad8-4ed9-8768-adbfafee22cd","Type":"ContainerStarted","Data":"a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0"} Oct 13 08:10:40 crc kubenswrapper[4833]: I1013 08:10:40.219422 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-js4km" podStartSLOduration=2.717086737 podStartE2EDuration="5.219394372s" podCreationTimestamp="2025-10-13 08:10:35 +0000 UTC" firstStartedPulling="2025-10-13 08:10:37.12648248 +0000 UTC m=+6127.226905396" lastFinishedPulling="2025-10-13 08:10:39.628790115 +0000 UTC m=+6129.729213031" observedRunningTime="2025-10-13 08:10:40.210576422 +0000 UTC m=+6130.310999358" watchObservedRunningTime="2025-10-13 08:10:40.219394372 +0000 UTC m=+6130.319817298" Oct 13 08:10:40 crc kubenswrapper[4833]: I1013 08:10:40.823392 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:41 crc kubenswrapper[4833]: I1013 08:10:41.280947 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5b9f5bccc5-4lqcj" Oct 13 08:10:41 crc kubenswrapper[4833]: I1013 08:10:41.382659 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-6b9f5bbfb5-2bc5l"] Oct 13 08:10:41 crc kubenswrapper[4833]: I1013 08:10:41.382977 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" 
containerName="octavia-api" containerID="cri-o://fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f" gracePeriod=30 Oct 13 08:10:41 crc kubenswrapper[4833]: I1013 08:10:41.383149 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="octavia-api-provider-agent" containerID="cri-o://b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef" gracePeriod=30 Oct 13 08:10:43 crc kubenswrapper[4833]: I1013 08:10:43.226977 4833 generic.go:334] "Generic (PLEG): container finished" podID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerID="b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef" exitCode=0 Oct 13 08:10:43 crc kubenswrapper[4833]: I1013 08:10:43.227191 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" event={"ID":"a86ac26b-2b19-4662-aa4b-2a94e78a18e9","Type":"ContainerDied","Data":"b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef"} Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.072415 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.130685 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-ovndb-tls-certs\") pod \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.130999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data-merged\") pod \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.131121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-scripts\") pod \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.131266 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data\") pod \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.131903 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-combined-ca-bundle\") pod \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.131979 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-octavia-run\") pod \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\" (UID: \"a86ac26b-2b19-4662-aa4b-2a94e78a18e9\") " Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.133239 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "a86ac26b-2b19-4662-aa4b-2a94e78a18e9" (UID: "a86ac26b-2b19-4662-aa4b-2a94e78a18e9"). InnerVolumeSpecName "octavia-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.143915 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-scripts" (OuterVolumeSpecName: "scripts") pod "a86ac26b-2b19-4662-aa4b-2a94e78a18e9" (UID: "a86ac26b-2b19-4662-aa4b-2a94e78a18e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.151555 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data" (OuterVolumeSpecName: "config-data") pod "a86ac26b-2b19-4662-aa4b-2a94e78a18e9" (UID: "a86ac26b-2b19-4662-aa4b-2a94e78a18e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.200939 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "a86ac26b-2b19-4662-aa4b-2a94e78a18e9" (UID: "a86ac26b-2b19-4662-aa4b-2a94e78a18e9"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.234972 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data-merged\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.235025 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.235039 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.235052 4833 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-octavia-run\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.257711 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a86ac26b-2b19-4662-aa4b-2a94e78a18e9" (UID: "a86ac26b-2b19-4662-aa4b-2a94e78a18e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.266675 4833 generic.go:334] "Generic (PLEG): container finished" podID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerID="fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f" exitCode=0 Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.266726 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" event={"ID":"a86ac26b-2b19-4662-aa4b-2a94e78a18e9","Type":"ContainerDied","Data":"fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f"} Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.266743 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.266760 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" event={"ID":"a86ac26b-2b19-4662-aa4b-2a94e78a18e9","Type":"ContainerDied","Data":"25530ceb1f317e8d8de02539b6933b6c4a3b95049b6fb6d21063ec2bcd9b6ba6"} Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.266782 4833 scope.go:117] "RemoveContainer" containerID="b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.298552 4833 scope.go:117] "RemoveContainer" containerID="fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.326255 4833 scope.go:117] "RemoveContainer" containerID="de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.336959 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.348098 4833 scope.go:117] "RemoveContainer" containerID="b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef" Oct 13 08:10:45 crc kubenswrapper[4833]: E1013 08:10:45.350142 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef\": container with ID starting with b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef not found: ID does not exist" containerID="b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.350178 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef"} err="failed to get container status \"b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef\": rpc error: code = NotFound desc = could not find container \"b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef\": container with ID starting with b7cc36ace7d12f43a0d66a66c02708dea54fe84c6b02bca3f8814b1d752a62ef not found: ID does not exist" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.350199 4833 scope.go:117] "RemoveContainer" containerID="fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f" Oct 13 08:10:45 crc kubenswrapper[4833]: E1013 08:10:45.350527 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f\": container with ID starting with fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f not found: ID does not exist" containerID="fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.350582 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f"} err="failed to get container status \"fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f\": rpc error: code = NotFound desc = could not find container \"fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f\": container with ID starting with fa93761cb611630899663f375ca7b9b3a1bac45b6a9d165a16af9efcf776112f not found: ID does not exist" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.350597 4833 scope.go:117] "RemoveContainer" containerID="de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef" Oct 13 08:10:45 crc kubenswrapper[4833]: E1013 08:10:45.351083 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef\": container with ID starting with de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef not found: ID does not exist" containerID="de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.351103 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef"} err="failed to get container status \"de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef\": rpc error: code = NotFound desc = could not find container \"de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef\": container with ID starting with de2c09a584a644a6b781272117c5b20e3b841156a964d314f3ed6d25c94e86ef not found: ID does not exist" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.388278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a86ac26b-2b19-4662-aa4b-2a94e78a18e9" (UID: "a86ac26b-2b19-4662-aa4b-2a94e78a18e9"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.439443 4833 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a86ac26b-2b19-4662-aa4b-2a94e78a18e9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.616779 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-6b9f5bbfb5-2bc5l"] Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.634718 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-6b9f5bbfb5-2bc5l"] Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.830904 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.830968 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:45 crc kubenswrapper[4833]: I1013 08:10:45.895997 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:46 crc kubenswrapper[4833]: I1013 08:10:46.355107 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:46 crc kubenswrapper[4833]: I1013 08:10:46.414498 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-js4km"] Oct 13 08:10:46 crc kubenswrapper[4833]: I1013 08:10:46.638954 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" path="/var/lib/kubelet/pods/a86ac26b-2b19-4662-aa4b-2a94e78a18e9/volumes" Oct 13 08:10:48 crc kubenswrapper[4833]: I1013 08:10:48.320459 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-js4km" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerName="registry-server" containerID="cri-o://a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0" gracePeriod=2 Oct 13 08:10:48 crc kubenswrapper[4833]: I1013 08:10:48.887554 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:48 crc kubenswrapper[4833]: I1013 08:10:48.961837 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-utilities\") pod \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " Oct 13 08:10:48 crc kubenswrapper[4833]: I1013 08:10:48.962472 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwbqn\" (UniqueName: \"kubernetes.io/projected/2f802ed1-aad8-4ed9-8768-adbfafee22cd-kube-api-access-lwbqn\") pod \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " Oct 13 08:10:48 crc kubenswrapper[4833]: I1013 08:10:48.962685 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-catalog-content\") pod \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\" (UID: \"2f802ed1-aad8-4ed9-8768-adbfafee22cd\") " Oct 13 08:10:48 crc kubenswrapper[4833]: I1013 08:10:48.963238 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-utilities" (OuterVolumeSpecName: "utilities") pod "2f802ed1-aad8-4ed9-8768-adbfafee22cd" (UID: "2f802ed1-aad8-4ed9-8768-adbfafee22cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:10:48 crc kubenswrapper[4833]: I1013 08:10:48.969702 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f802ed1-aad8-4ed9-8768-adbfafee22cd-kube-api-access-lwbqn" (OuterVolumeSpecName: "kube-api-access-lwbqn") pod "2f802ed1-aad8-4ed9-8768-adbfafee22cd" (UID: "2f802ed1-aad8-4ed9-8768-adbfafee22cd"). InnerVolumeSpecName "kube-api-access-lwbqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.034269 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f802ed1-aad8-4ed9-8768-adbfafee22cd" (UID: "2f802ed1-aad8-4ed9-8768-adbfafee22cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.065107 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.065152 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f802ed1-aad8-4ed9-8768-adbfafee22cd-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.065163 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwbqn\" (UniqueName: \"kubernetes.io/projected/2f802ed1-aad8-4ed9-8768-adbfafee22cd-kube-api-access-lwbqn\") on node \"crc\" DevicePath \"\"" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.345418 4833 generic.go:334] "Generic (PLEG): container finished" podID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerID="a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0" exitCode=0 Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.345492 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-js4km" event={"ID":"2f802ed1-aad8-4ed9-8768-adbfafee22cd","Type":"ContainerDied","Data":"a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0"} Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.345600 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-js4km" event={"ID":"2f802ed1-aad8-4ed9-8768-adbfafee22cd","Type":"ContainerDied","Data":"044dde3d455927d023a952f3c478935c020d1d6f23e6f8aea0be171a2f692d17"} Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.345638 4833 scope.go:117] "RemoveContainer" containerID="a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.345778 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-js4km" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.385810 4833 scope.go:117] "RemoveContainer" containerID="9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.402974 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-js4km"] Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.412219 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-js4km"] Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.421516 4833 scope.go:117] "RemoveContainer" containerID="46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.456083 4833 scope.go:117] "RemoveContainer" containerID="a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0" Oct 13 08:10:49 crc kubenswrapper[4833]: E1013 08:10:49.456819 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0\": container with ID starting with a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0 not found: ID does not exist" containerID="a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.456868 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0"} err="failed to get container status \"a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0\": rpc error: code = NotFound desc = could not find container \"a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0\": container with ID starting with a30419ab711468146faf3cacc9e9042d5244c92c613d3ba229b893fe1ad84ac0 not found: ID does not exist" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.456899 4833 scope.go:117] "RemoveContainer" containerID="9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498" Oct 13 08:10:49 crc kubenswrapper[4833]: E1013 08:10:49.457302 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498\": container with ID starting with 9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498 not found: ID does not exist" containerID="9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.457326 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498"} err="failed to get container status \"9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498\": rpc error: code = NotFound desc = could not find container \"9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498\": container with ID starting with 9a62230bb4957781f5bcb3102f9195df4809720fad61e4ec332e52a6b06d7498 not found: ID does not exist" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.457342 4833 scope.go:117] "RemoveContainer" containerID="46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753" Oct 13 08:10:49 crc kubenswrapper[4833]: E1013 08:10:49.457870 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753\": container with ID starting with 46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753 not found: ID does not exist" containerID="46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753" Oct 13 08:10:49 crc kubenswrapper[4833]: I1013 08:10:49.457926 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753"} err="failed to get container status \"46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753\": rpc error: code = NotFound desc = could not find container \"46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753\": container with ID starting with 46a0cfa84da231977ebdbb892c13e62eceabacccad6ecf868a42d4faf8c05753 not found: ID does not exist" Oct 13 08:10:50 crc kubenswrapper[4833]: I1013 08:10:50.638475 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" path="/var/lib/kubelet/pods/2f802ed1-aad8-4ed9-8768-adbfafee22cd/volumes" Oct 13 08:10:59 crc kubenswrapper[4833]: I1013 08:10:59.829852 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="octavia-api" probeResult="failure" output="Get \"http://10.217.1.108:9876/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 08:10:59 crc kubenswrapper[4833]: I1013 08:10:59.829862 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/octavia-api-6b9f5bbfb5-2bc5l" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="octavia-api-provider-agent" probeResult="failure" output="Get \"http://10.217.1.108:9876/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 08:11:00 crc kubenswrapper[4833]: I1013 08:11:00.542583 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:11:00 crc kubenswrapper[4833]: I1013 08:11:00.542657 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:11:00 crc kubenswrapper[4833]: I1013 08:11:00.542716 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 08:11:00 crc kubenswrapper[4833]: I1013 08:11:00.543510 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e914b3d8cdaf60183f90c0954b20183878f4438a5f2a58b95970c43060e5f653"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 08:11:00 crc kubenswrapper[4833]: I1013 08:11:00.543604 4833 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://e914b3d8cdaf60183f90c0954b20183878f4438a5f2a58b95970c43060e5f653" gracePeriod=600 Oct 13 08:11:01 crc kubenswrapper[4833]: I1013 08:11:01.493251 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="e914b3d8cdaf60183f90c0954b20183878f4438a5f2a58b95970c43060e5f653" exitCode=0 Oct 13 08:11:01 crc kubenswrapper[4833]: I1013 08:11:01.493456 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"e914b3d8cdaf60183f90c0954b20183878f4438a5f2a58b95970c43060e5f653"} Oct 13 08:11:01 crc kubenswrapper[4833]: I1013 08:11:01.493906 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8"} Oct 13 08:11:01 crc kubenswrapper[4833]: I1013 08:11:01.493933 4833 scope.go:117] "RemoveContainer" containerID="84144bfb0195d5a0d1b4378bb219c91ea274fee3df90291859e033d126a958d8" Oct 13 08:11:05 crc kubenswrapper[4833]: I1013 08:11:05.303785 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-678599687f-tdfgk"] Oct 13 08:11:05 crc kubenswrapper[4833]: I1013 08:11:05.304530 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-678599687f-tdfgk" podUID="a2062231-fd5b-4e8a-97db-20d869a9bf89" containerName="octavia-amphora-httpd" containerID="cri-o://9565c77871600315b450fc3354bb7e61dd1f9c7921dfd59d34d37f70ae5dfe84" gracePeriod=30 Oct 13 08:11:05 crc kubenswrapper[4833]: I1013 08:11:05.547075 4833 generic.go:334] "Generic (PLEG): container finished" podID="a2062231-fd5b-4e8a-97db-20d869a9bf89" containerID="9565c77871600315b450fc3354bb7e61dd1f9c7921dfd59d34d37f70ae5dfe84" exitCode=0 Oct 13 08:11:05 crc kubenswrapper[4833]: I1013 08:11:05.547144 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-tdfgk" event={"ID":"a2062231-fd5b-4e8a-97db-20d869a9bf89","Type":"ContainerDied","Data":"9565c77871600315b450fc3354bb7e61dd1f9c7921dfd59d34d37f70ae5dfe84"} Oct 13 08:11:05 crc kubenswrapper[4833]: I1013 08:11:05.959144 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.046634 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2062231-fd5b-4e8a-97db-20d869a9bf89-httpd-config\") pod \"a2062231-fd5b-4e8a-97db-20d869a9bf89\" (UID: \"a2062231-fd5b-4e8a-97db-20d869a9bf89\") " Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.046730 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2062231-fd5b-4e8a-97db-20d869a9bf89-amphora-image\") pod \"a2062231-fd5b-4e8a-97db-20d869a9bf89\" (UID: \"a2062231-fd5b-4e8a-97db-20d869a9bf89\") " Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.082871 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2062231-fd5b-4e8a-97db-20d869a9bf89-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a2062231-fd5b-4e8a-97db-20d869a9bf89" (UID: "a2062231-fd5b-4e8a-97db-20d869a9bf89"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.135273 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2062231-fd5b-4e8a-97db-20d869a9bf89-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "a2062231-fd5b-4e8a-97db-20d869a9bf89" (UID: "a2062231-fd5b-4e8a-97db-20d869a9bf89"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.149276 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2062231-fd5b-4e8a-97db-20d869a9bf89-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.149308 4833 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2062231-fd5b-4e8a-97db-20d869a9bf89-amphora-image\") on node \"crc\" DevicePath \"\"" Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.564749 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-tdfgk" event={"ID":"a2062231-fd5b-4e8a-97db-20d869a9bf89","Type":"ContainerDied","Data":"095a12d20d4a75f3630d47dd6cda304a22093081f4c6fa5bc65c76160cb9c2e9"} Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.564804 4833 scope.go:117] "RemoveContainer" containerID="9565c77871600315b450fc3354bb7e61dd1f9c7921dfd59d34d37f70ae5dfe84" Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.564920 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-678599687f-tdfgk" Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.614776 4833 scope.go:117] "RemoveContainer" containerID="7d126dfe6920809da89fc8552c8366d14317d4be0c47e5a101f693934e59f160" Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.663854 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-678599687f-tdfgk"] Oct 13 08:11:06 crc kubenswrapper[4833]: I1013 08:11:06.663902 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-678599687f-tdfgk"] Oct 13 08:11:08 crc kubenswrapper[4833]: I1013 08:11:08.637811 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2062231-fd5b-4e8a-97db-20d869a9bf89" path="/var/lib/kubelet/pods/a2062231-fd5b-4e8a-97db-20d869a9bf89/volumes" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.505777 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-678599687f-sc8lv"] Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508349 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948a7fdd-f311-454f-b73f-3c62b09a90eb" containerName="init" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508383 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="948a7fdd-f311-454f-b73f-3c62b09a90eb" containerName="init" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508399 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerName="extract-content" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508408 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerName="extract-content" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508425 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2062231-fd5b-4e8a-97db-20d869a9bf89" containerName="octavia-amphora-httpd" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508433 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2062231-fd5b-4e8a-97db-20d869a9bf89" containerName="octavia-amphora-httpd" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508448 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerName="extract-utilities" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508456 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerName="extract-utilities" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508483 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="init" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508518 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="init" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508557 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948a7fdd-f311-454f-b73f-3c62b09a90eb" containerName="octavia-db-sync" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508568 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="948a7fdd-f311-454f-b73f-3c62b09a90eb" containerName="octavia-db-sync" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508589 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" 
containerName="octavia-api-provider-agent" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508596 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="octavia-api-provider-agent" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508611 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerName="registry-server" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508618 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerName="registry-server" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508631 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2062231-fd5b-4e8a-97db-20d869a9bf89" containerName="init" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508639 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2062231-fd5b-4e8a-97db-20d869a9bf89" containerName="init" Oct 13 08:11:11 crc kubenswrapper[4833]: E1013 08:11:11.508650 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="octavia-api" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508659 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="octavia-api" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508933 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="octavia-api" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508954 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2062231-fd5b-4e8a-97db-20d869a9bf89" containerName="octavia-amphora-httpd" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508974 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a86ac26b-2b19-4662-aa4b-2a94e78a18e9" containerName="octavia-api-provider-agent" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.508987 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f802ed1-aad8-4ed9-8768-adbfafee22cd" containerName="registry-server" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.509001 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="948a7fdd-f311-454f-b73f-3c62b09a90eb" containerName="octavia-db-sync" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.510336 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-678599687f-sc8lv" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.513463 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.517459 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-sc8lv"] Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.573857 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ca0eb080-87bc-42d1-8250-15bad5d138cd-httpd-config\") pod \"octavia-image-upload-678599687f-sc8lv\" (UID: \"ca0eb080-87bc-42d1-8250-15bad5d138cd\") " pod="openstack/octavia-image-upload-678599687f-sc8lv" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.574364 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ca0eb080-87bc-42d1-8250-15bad5d138cd-amphora-image\") pod \"octavia-image-upload-678599687f-sc8lv\" (UID: \"ca0eb080-87bc-42d1-8250-15bad5d138cd\") " pod="openstack/octavia-image-upload-678599687f-sc8lv" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.676113 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ca0eb080-87bc-42d1-8250-15bad5d138cd-amphora-image\") pod \"octavia-image-upload-678599687f-sc8lv\" (UID: \"ca0eb080-87bc-42d1-8250-15bad5d138cd\") " pod="openstack/octavia-image-upload-678599687f-sc8lv" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.676447 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ca0eb080-87bc-42d1-8250-15bad5d138cd-httpd-config\") pod \"octavia-image-upload-678599687f-sc8lv\" (UID: \"ca0eb080-87bc-42d1-8250-15bad5d138cd\") " pod="openstack/octavia-image-upload-678599687f-sc8lv" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.676589 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ca0eb080-87bc-42d1-8250-15bad5d138cd-amphora-image\") pod \"octavia-image-upload-678599687f-sc8lv\" (UID: \"ca0eb080-87bc-42d1-8250-15bad5d138cd\") " pod="openstack/octavia-image-upload-678599687f-sc8lv" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.683271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ca0eb080-87bc-42d1-8250-15bad5d138cd-httpd-config\") pod \"octavia-image-upload-678599687f-sc8lv\" (UID: \"ca0eb080-87bc-42d1-8250-15bad5d138cd\") " pod="openstack/octavia-image-upload-678599687f-sc8lv" Oct 13 08:11:11 crc kubenswrapper[4833]: I1013 08:11:11.844968 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-678599687f-sc8lv" Oct 13 08:11:12 crc kubenswrapper[4833]: I1013 08:11:12.324052 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-sc8lv"] Oct 13 08:11:12 crc kubenswrapper[4833]: I1013 08:11:12.655706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-sc8lv" event={"ID":"ca0eb080-87bc-42d1-8250-15bad5d138cd","Type":"ContainerStarted","Data":"6c86435f216b22e2d908328a9f469f92ad82f80c89ba23aa75e2aa0026092874"} Oct 13 08:11:13 crc kubenswrapper[4833]: I1013 08:11:13.658650 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-sc8lv" event={"ID":"ca0eb080-87bc-42d1-8250-15bad5d138cd","Type":"ContainerStarted","Data":"cd992a32fa090384b75412fdda6bee6981a5f76bf5ba9af15895e555247efadd"} Oct 13 08:11:14 crc kubenswrapper[4833]: I1013 08:11:14.673701 4833 generic.go:334] "Generic (PLEG): container finished" podID="ca0eb080-87bc-42d1-8250-15bad5d138cd" containerID="cd992a32fa090384b75412fdda6bee6981a5f76bf5ba9af15895e555247efadd" exitCode=0 Oct 13 08:11:14 crc kubenswrapper[4833]: I1013 08:11:14.673811 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-sc8lv" event={"ID":"ca0eb080-87bc-42d1-8250-15bad5d138cd","Type":"ContainerDied","Data":"cd992a32fa090384b75412fdda6bee6981a5f76bf5ba9af15895e555247efadd"} Oct 13 08:11:15 crc kubenswrapper[4833]: I1013 08:11:15.689092 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-sc8lv" event={"ID":"ca0eb080-87bc-42d1-8250-15bad5d138cd","Type":"ContainerStarted","Data":"3c1ca60b12571070c54f729f14f1003dca175213278cc181d8d43fd4266693b2"} Oct 13 08:11:15 crc kubenswrapper[4833]: I1013 08:11:15.722193 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-678599687f-sc8lv" podStartSLOduration=4.129242616 podStartE2EDuration="4.722166969s" podCreationTimestamp="2025-10-13 08:11:11 +0000 UTC" firstStartedPulling="2025-10-13 08:11:12.319511823 +0000 UTC m=+6162.419934739" lastFinishedPulling="2025-10-13 08:11:12.912436166 +0000 UTC m=+6163.012859092" observedRunningTime="2025-10-13 08:11:15.709789058 +0000 UTC m=+6165.810212014" watchObservedRunningTime="2025-10-13 08:11:15.722166969 +0000 UTC m=+6165.822589905" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.637125 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-njgq2"] Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.644123 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.647250 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.650798 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.656761 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.696108 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-njgq2"] Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.755511 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-config-data\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.755585 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-amphora-certs\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.755635 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-scripts\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.755693 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/3a7654ab-fe63-4757-8e67-4cdf67232494-hm-ports\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.755732 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3a7654ab-fe63-4757-8e67-4cdf67232494-config-data-merged\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.755770 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-combined-ca-bundle\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.857077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/3a7654ab-fe63-4757-8e67-4cdf67232494-hm-ports\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc 
kubenswrapper[4833]: I1013 08:11:23.857134 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3a7654ab-fe63-4757-8e67-4cdf67232494-config-data-merged\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.857167 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-combined-ca-bundle\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.857230 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-config-data\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.857260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-amphora-certs\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.857291 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-scripts\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.858108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3a7654ab-fe63-4757-8e67-4cdf67232494-config-data-merged\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.860281 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/3a7654ab-fe63-4757-8e67-4cdf67232494-hm-ports\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.864933 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-combined-ca-bundle\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.865327 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-config-data\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.865868 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-scripts\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.865885 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/3a7654ab-fe63-4757-8e67-4cdf67232494-amphora-certs\") pod \"octavia-healthmanager-njgq2\" (UID: \"3a7654ab-fe63-4757-8e67-4cdf67232494\") " pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:23 crc kubenswrapper[4833]: I1013 08:11:23.974314 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:24 crc kubenswrapper[4833]: I1013 08:11:24.662939 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-njgq2"] Oct 13 08:11:24 crc kubenswrapper[4833]: I1013 08:11:24.799706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-njgq2" event={"ID":"3a7654ab-fe63-4757-8e67-4cdf67232494","Type":"ContainerStarted","Data":"094b0c48cb491858e81894a8c38876bca1cc4ab3bd83afbe3b9b31aa1d449c6d"} Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.671746 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-6w4xz"] Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.674511 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.679872 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.680644 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.683695 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-6w4xz"] Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.796184 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-amphora-certs\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.796367 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/120a8304-64d3-4f08-b340-8f0853335cba-hm-ports\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.796899 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-scripts\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.798058 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-config-data\") pod 
\"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.798192 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-combined-ca-bundle\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.798388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/120a8304-64d3-4f08-b340-8f0853335cba-config-data-merged\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.816846 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-njgq2" event={"ID":"3a7654ab-fe63-4757-8e67-4cdf67232494","Type":"ContainerStarted","Data":"909f05c41873aa49ed808fd565e30b028ff4bc13c2b754cf392f81ef39fd221b"} Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.872262 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d2nc2"] Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.875500 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.882730 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2nc2"] Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.900651 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/120a8304-64d3-4f08-b340-8f0853335cba-hm-ports\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.900711 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-scripts\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.900776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-config-data\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.900808 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-combined-ca-bundle\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.900852 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/120a8304-64d3-4f08-b340-8f0853335cba-config-data-merged\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.900901 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-amphora-certs\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.902572 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/120a8304-64d3-4f08-b340-8f0853335cba-config-data-merged\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.903150 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/120a8304-64d3-4f08-b340-8f0853335cba-hm-ports\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.907741 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-scripts\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.907941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-combined-ca-bundle\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.930066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-config-data\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.934072 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/120a8304-64d3-4f08-b340-8f0853335cba-amphora-certs\") pod \"octavia-housekeeping-6w4xz\" (UID: \"120a8304-64d3-4f08-b340-8f0853335cba\") " pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:25 crc kubenswrapper[4833]: I1013 08:11:25.993686 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.009194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-utilities\") pod \"redhat-operators-d2nc2\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.009380 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtn9\" (UniqueName: \"kubernetes.io/projected/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-kube-api-access-kjtn9\") pod \"redhat-operators-d2nc2\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.009477 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-catalog-content\") pod \"redhat-operators-d2nc2\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.112058 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjtn9\" (UniqueName: \"kubernetes.io/projected/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-kube-api-access-kjtn9\") pod \"redhat-operators-d2nc2\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.112565 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-catalog-content\") pod \"redhat-operators-d2nc2\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.112854 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-utilities\") pod \"redhat-operators-d2nc2\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.115180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-catalog-content\") pod \"redhat-operators-d2nc2\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.115287 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-utilities\") pod \"redhat-operators-d2nc2\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.135951 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjtn9\" (UniqueName: \"kubernetes.io/projected/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-kube-api-access-kjtn9\") pod \"redhat-operators-d2nc2\" (UID: 
\"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.208567 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.511838 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-6w4xz"] Oct 13 08:11:26 crc kubenswrapper[4833]: W1013 08:11:26.519946 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod120a8304_64d3_4f08_b340_8f0853335cba.slice/crio-5afe561a4ab6ba779d294d6452688a6357ca80f792c559b73b8bee7071c9c663 WatchSource:0}: Error finding container 5afe561a4ab6ba779d294d6452688a6357ca80f792c559b73b8bee7071c9c663: Status 404 returned error can't find the container with id 5afe561a4ab6ba779d294d6452688a6357ca80f792c559b73b8bee7071c9c663 Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.670152 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2nc2"] Oct 13 08:11:26 crc kubenswrapper[4833]: W1013 08:11:26.670239 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e2c5c4b_b290_46ee_9075_b7e10bf408f5.slice/crio-4d7b5fce13c219586b8782fee0b7b724b75b49b5d3477d126141bd99235a4926 WatchSource:0}: Error finding container 4d7b5fce13c219586b8782fee0b7b724b75b49b5d3477d126141bd99235a4926: Status 404 returned error can't find the container with id 4d7b5fce13c219586b8782fee0b7b724b75b49b5d3477d126141bd99235a4926 Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.825788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-6w4xz" event={"ID":"120a8304-64d3-4f08-b340-8f0853335cba","Type":"ContainerStarted","Data":"5afe561a4ab6ba779d294d6452688a6357ca80f792c559b73b8bee7071c9c663"} Oct 13 08:11:26 crc kubenswrapper[4833]: I1013 08:11:26.833229 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2nc2" event={"ID":"6e2c5c4b-b290-46ee-9075-b7e10bf408f5","Type":"ContainerStarted","Data":"4d7b5fce13c219586b8782fee0b7b724b75b49b5d3477d126141bd99235a4926"} Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.166612 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-w4qcr"] Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.176218 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.188900 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-w4qcr"] Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.192066 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.192305 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.234725 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-config-data\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.234821 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-config-data-merged\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.234867 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-hm-ports\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.234978 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-scripts\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.235216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-combined-ca-bundle\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.235796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-amphora-certs\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.339699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-combined-ca-bundle\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.339933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-amphora-certs\") pod \"octavia-worker-w4qcr\" (UID: 
\"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.341318 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-config-data\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.341369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-config-data-merged\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.341415 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-hm-ports\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.341460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-scripts\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.342240 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-config-data-merged\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.343295 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-hm-ports\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.347367 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-amphora-certs\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.361137 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-combined-ca-bundle\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.361147 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-config-data\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.361459 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/87bfdfbf-fa98-4597-bbc7-bb9add7b65db-scripts\") pod \"octavia-worker-w4qcr\" (UID: \"87bfdfbf-fa98-4597-bbc7-bb9add7b65db\") " pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.520631 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.846281 4833 generic.go:334] "Generic (PLEG): container finished" podID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerID="211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8" exitCode=0 Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.846459 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2nc2" event={"ID":"6e2c5c4b-b290-46ee-9075-b7e10bf408f5","Type":"ContainerDied","Data":"211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8"} Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.851332 4833 generic.go:334] "Generic (PLEG): container finished" podID="3a7654ab-fe63-4757-8e67-4cdf67232494" containerID="909f05c41873aa49ed808fd565e30b028ff4bc13c2b754cf392f81ef39fd221b" exitCode=0 Oct 13 08:11:27 crc kubenswrapper[4833]: I1013 08:11:27.851372 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-njgq2" event={"ID":"3a7654ab-fe63-4757-8e67-4cdf67232494","Type":"ContainerDied","Data":"909f05c41873aa49ed808fd565e30b028ff4bc13c2b754cf392f81ef39fd221b"} Oct 13 08:11:28 crc kubenswrapper[4833]: I1013 08:11:28.074616 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-w4qcr"] Oct 13 08:11:28 crc kubenswrapper[4833]: I1013 08:11:28.872359 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-njgq2" event={"ID":"3a7654ab-fe63-4757-8e67-4cdf67232494","Type":"ContainerStarted","Data":"ebc89807b168f242ed3446fd640727276dca6bf1e392a7eb685db542a49a0dad"} Oct 13 08:11:28 crc kubenswrapper[4833]: I1013 08:11:28.873146 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:28 crc kubenswrapper[4833]: I1013 08:11:28.875721 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-w4qcr" event={"ID":"87bfdfbf-fa98-4597-bbc7-bb9add7b65db","Type":"ContainerStarted","Data":"0195774d8142c94f92b0bf43726092bf705f669e2d7b0493006cb200ccfcbdba"} Oct 13 08:11:28 crc kubenswrapper[4833]: I1013 08:11:28.893730 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-njgq2" podStartSLOduration=5.893713037 podStartE2EDuration="5.893713037s" podCreationTimestamp="2025-10-13 08:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:11:28.889729524 +0000 UTC m=+6178.990152440" watchObservedRunningTime="2025-10-13 08:11:28.893713037 +0000 UTC m=+6178.994135953" Oct 13 08:11:29 crc kubenswrapper[4833]: I1013 08:11:29.892523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2nc2" event={"ID":"6e2c5c4b-b290-46ee-9075-b7e10bf408f5","Type":"ContainerStarted","Data":"638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92"} Oct 13 08:11:29 crc kubenswrapper[4833]: I1013 08:11:29.898140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-6w4xz" 
event={"ID":"120a8304-64d3-4f08-b340-8f0853335cba","Type":"ContainerStarted","Data":"c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4"} Oct 13 08:11:30 crc kubenswrapper[4833]: I1013 08:11:30.910115 4833 generic.go:334] "Generic (PLEG): container finished" podID="120a8304-64d3-4f08-b340-8f0853335cba" containerID="c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4" exitCode=0 Oct 13 08:11:30 crc kubenswrapper[4833]: I1013 08:11:30.910288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-6w4xz" event={"ID":"120a8304-64d3-4f08-b340-8f0853335cba","Type":"ContainerDied","Data":"c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4"} Oct 13 08:11:30 crc kubenswrapper[4833]: I1013 08:11:30.914426 4833 generic.go:334] "Generic (PLEG): container finished" podID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerID="638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92" exitCode=0 Oct 13 08:11:30 crc kubenswrapper[4833]: I1013 08:11:30.914645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2nc2" event={"ID":"6e2c5c4b-b290-46ee-9075-b7e10bf408f5","Type":"ContainerDied","Data":"638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92"} Oct 13 08:11:30 crc kubenswrapper[4833]: I1013 08:11:30.918101 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-w4qcr" event={"ID":"87bfdfbf-fa98-4597-bbc7-bb9add7b65db","Type":"ContainerStarted","Data":"fa78a4e48fba78aee31497b18e069b0bc1b80da2c0d59960729fb16923b28782"} Oct 13 08:11:31 crc kubenswrapper[4833]: I1013 08:11:31.929638 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2nc2" event={"ID":"6e2c5c4b-b290-46ee-9075-b7e10bf408f5","Type":"ContainerStarted","Data":"8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586"} Oct 13 08:11:31 crc kubenswrapper[4833]: I1013 08:11:31.937165 4833 generic.go:334] "Generic (PLEG): container finished" podID="87bfdfbf-fa98-4597-bbc7-bb9add7b65db" containerID="fa78a4e48fba78aee31497b18e069b0bc1b80da2c0d59960729fb16923b28782" exitCode=0 Oct 13 08:11:31 crc kubenswrapper[4833]: I1013 08:11:31.937267 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-w4qcr" event={"ID":"87bfdfbf-fa98-4597-bbc7-bb9add7b65db","Type":"ContainerDied","Data":"fa78a4e48fba78aee31497b18e069b0bc1b80da2c0d59960729fb16923b28782"} Oct 13 08:11:31 crc kubenswrapper[4833]: E1013 08:11:31.945683 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod120a8304_64d3_4f08_b340_8f0853335cba.slice/crio-conmon-c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:11:31 crc kubenswrapper[4833]: I1013 08:11:31.948379 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-6w4xz" event={"ID":"120a8304-64d3-4f08-b340-8f0853335cba","Type":"ContainerStarted","Data":"109dcb634f3422653780ebf4736c179213fb42cd0eb56f6c0b7d0f82a153000a"} Oct 13 08:11:31 crc kubenswrapper[4833]: I1013 08:11:31.948496 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:31 crc kubenswrapper[4833]: I1013 08:11:31.963639 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-d2nc2" podStartSLOduration=3.196011185 podStartE2EDuration="6.963623705s" podCreationTimestamp="2025-10-13 08:11:25 +0000 UTC" firstStartedPulling="2025-10-13 08:11:27.848829627 +0000 UTC m=+6177.949252543" lastFinishedPulling="2025-10-13 08:11:31.616442147 +0000 UTC m=+6181.716865063" observedRunningTime="2025-10-13 08:11:31.958108849 +0000 UTC m=+6182.058531755" watchObservedRunningTime="2025-10-13 08:11:31.963623705 +0000 UTC m=+6182.064046621" Oct 13 08:11:31 crc kubenswrapper[4833]: I1013 08:11:31.988781 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-6w4xz" podStartSLOduration=4.720383854 podStartE2EDuration="6.98876163s" podCreationTimestamp="2025-10-13 08:11:25 +0000 UTC" firstStartedPulling="2025-10-13 08:11:26.5232957 +0000 UTC m=+6176.623718606" lastFinishedPulling="2025-10-13 08:11:28.791673466 +0000 UTC m=+6178.892096382" observedRunningTime="2025-10-13 08:11:31.98208631 +0000 UTC m=+6182.082509226" watchObservedRunningTime="2025-10-13 08:11:31.98876163 +0000 UTC m=+6182.089184556" Oct 13 08:11:32 crc kubenswrapper[4833]: I1013 08:11:32.960669 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-w4qcr" event={"ID":"87bfdfbf-fa98-4597-bbc7-bb9add7b65db","Type":"ContainerStarted","Data":"97544ec74eb0792b01a736242e371789a9d07e15684995f9371c2e10d28ea9b3"} Oct 13 08:11:33 crc kubenswrapper[4833]: I1013 08:11:33.969160 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:34 crc kubenswrapper[4833]: I1013 08:11:34.035381 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-w4qcr" podStartSLOduration=5.112683802 podStartE2EDuration="7.035358972s" podCreationTimestamp="2025-10-13 08:11:27 +0000 UTC" firstStartedPulling="2025-10-13 08:11:28.103341861 +0000 UTC m=+6178.203764777" lastFinishedPulling="2025-10-13 08:11:30.026017031 +0000 UTC m=+6180.126439947" observedRunningTime="2025-10-13 08:11:32.988123955 +0000 UTC m=+6183.088546901" watchObservedRunningTime="2025-10-13 08:11:34.035358972 +0000 UTC m=+6184.135781888" Oct 13 08:11:34 crc kubenswrapper[4833]: I1013 08:11:34.042698 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7n2mh"] Oct 13 08:11:34 crc kubenswrapper[4833]: I1013 08:11:34.055715 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7n2mh"] Oct 13 08:11:34 crc kubenswrapper[4833]: I1013 08:11:34.640096 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6b0b75-c288-4ecb-9bc0-96c9a79abb10" path="/var/lib/kubelet/pods/4e6b0b75-c288-4ecb-9bc0-96c9a79abb10/volumes" Oct 13 08:11:36 crc kubenswrapper[4833]: I1013 08:11:36.210244 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:36 crc kubenswrapper[4833]: I1013 08:11:36.210791 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:37 crc kubenswrapper[4833]: I1013 08:11:37.298867 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d2nc2" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="registry-server" probeResult="failure" output=< Oct 13 08:11:37 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Oct 13 08:11:37 crc 
kubenswrapper[4833]: > Oct 13 08:11:39 crc kubenswrapper[4833]: I1013 08:11:39.006736 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-njgq2" Oct 13 08:11:41 crc kubenswrapper[4833]: I1013 08:11:41.027505 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-6w4xz" Oct 13 08:11:42 crc kubenswrapper[4833]: E1013 08:11:42.224169 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod120a8304_64d3_4f08_b340_8f0853335cba.slice/crio-conmon-c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:11:42 crc kubenswrapper[4833]: I1013 08:11:42.559767 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-w4qcr" Oct 13 08:11:44 crc kubenswrapper[4833]: I1013 08:11:44.031792 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3eb6-account-create-xgvlm"] Oct 13 08:11:44 crc kubenswrapper[4833]: I1013 08:11:44.042834 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3eb6-account-create-xgvlm"] Oct 13 08:11:44 crc kubenswrapper[4833]: I1013 08:11:44.638667 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a" path="/var/lib/kubelet/pods/5ebeb361-7c2d-4a9c-b5da-cc39cd98e24a/volumes" Oct 13 08:11:46 crc kubenswrapper[4833]: I1013 08:11:46.279253 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:46 crc kubenswrapper[4833]: I1013 08:11:46.357922 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:46 crc kubenswrapper[4833]: I1013 08:11:46.534060 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2nc2"] Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.102554 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d2nc2" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="registry-server" containerID="cri-o://8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586" gracePeriod=2 Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.669488 4833 util.go:48] "No ready sandbox for pod can be found. 
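The startup-probe failure above ("timeout: failed to connect service \":50051\" within 1s") resembles the output of a gRPC health check against the registry-server port; ten seconds later the same probe flips to started and the pod goes ready. A client-side equivalent in Go (the address and the 1s budget come from the log; the standard gRPC health service is an assumption about the server):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// 1s budget, mirroring the probe timeout reported in the log.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		fmt.Println("probe failure:", err) // analogous to the kubelet's "Probe failed"
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	fmt.Println("probe status:", resp.Status) // SERVING once the catalog has loaded
}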
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.732443 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjtn9\" (UniqueName: \"kubernetes.io/projected/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-kube-api-access-kjtn9\") pod \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.732553 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-utilities\") pod \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.732612 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-catalog-content\") pod \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\" (UID: \"6e2c5c4b-b290-46ee-9075-b7e10bf408f5\") " Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.738809 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-utilities" (OuterVolumeSpecName: "utilities") pod "6e2c5c4b-b290-46ee-9075-b7e10bf408f5" (UID: "6e2c5c4b-b290-46ee-9075-b7e10bf408f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.747791 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-kube-api-access-kjtn9" (OuterVolumeSpecName: "kube-api-access-kjtn9") pod "6e2c5c4b-b290-46ee-9075-b7e10bf408f5" (UID: "6e2c5c4b-b290-46ee-9075-b7e10bf408f5"). InnerVolumeSpecName "kube-api-access-kjtn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.823419 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e2c5c4b-b290-46ee-9075-b7e10bf408f5" (UID: "6e2c5c4b-b290-46ee-9075-b7e10bf408f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.835551 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjtn9\" (UniqueName: \"kubernetes.io/projected/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-kube-api-access-kjtn9\") on node \"crc\" DevicePath \"\"" Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.835729 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:11:48 crc kubenswrapper[4833]: I1013 08:11:48.835782 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c5c4b-b290-46ee-9075-b7e10bf408f5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.128290 4833 generic.go:334] "Generic (PLEG): container finished" podID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerID="8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586" exitCode=0 Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.128386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2nc2" event={"ID":"6e2c5c4b-b290-46ee-9075-b7e10bf408f5","Type":"ContainerDied","Data":"8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586"} Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.128429 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2nc2" event={"ID":"6e2c5c4b-b290-46ee-9075-b7e10bf408f5","Type":"ContainerDied","Data":"4d7b5fce13c219586b8782fee0b7b724b75b49b5d3477d126141bd99235a4926"} Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.128520 4833 scope.go:117] "RemoveContainer" containerID="8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.129352 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2nc2" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.179738 4833 scope.go:117] "RemoveContainer" containerID="638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.248431 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2nc2"] Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.261276 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d2nc2"] Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.267814 4833 scope.go:117] "RemoveContainer" containerID="211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.340820 4833 scope.go:117] "RemoveContainer" containerID="8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586" Oct 13 08:11:49 crc kubenswrapper[4833]: E1013 08:11:49.354163 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586\": container with ID starting with 8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586 not found: ID does not exist" containerID="8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.354212 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586"} err="failed to get container status \"8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586\": rpc error: code = NotFound desc = could not find container \"8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586\": container with ID starting with 8a424c8853f5ffd0b89ac28b002b927fc61d61f6e6874e93b223ae4869407586 not found: ID does not exist" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.354238 4833 scope.go:117] "RemoveContainer" containerID="638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92" Oct 13 08:11:49 crc kubenswrapper[4833]: E1013 08:11:49.368528 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92\": container with ID starting with 638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92 not found: ID does not exist" containerID="638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.368584 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92"} err="failed to get container status \"638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92\": rpc error: code = NotFound desc = could not find container \"638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92\": container with ID starting with 638117f664e60f878d0ce9b27e60b7cee169a8f3eb4757d2f037762d7d338c92 not found: ID does not exist" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.368612 4833 scope.go:117] "RemoveContainer" containerID="211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8" Oct 13 08:11:49 crc kubenswrapper[4833]: E1013 08:11:49.372911 4833 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8\": container with ID starting with 211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8 not found: ID does not exist" containerID="211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8" Oct 13 08:11:49 crc kubenswrapper[4833]: I1013 08:11:49.372942 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8"} err="failed to get container status \"211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8\": rpc error: code = NotFound desc = could not find container \"211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8\": container with ID starting with 211084ba6864929e2e9d81cb744cff2104d17cc2561ac82325933283493ec3a8 not found: ID does not exist" Oct 13 08:11:50 crc kubenswrapper[4833]: I1013 08:11:50.649480 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" path="/var/lib/kubelet/pods/6e2c5c4b-b290-46ee-9075-b7e10bf408f5/volumes" Oct 13 08:11:51 crc kubenswrapper[4833]: I1013 08:11:51.070867 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2qxgl"] Oct 13 08:11:51 crc kubenswrapper[4833]: I1013 08:11:51.082135 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2qxgl"] Oct 13 08:11:52 crc kubenswrapper[4833]: E1013 08:11:52.489270 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod120a8304_64d3_4f08_b340_8f0853335cba.slice/crio-conmon-c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:11:52 crc kubenswrapper[4833]: I1013 08:11:52.655644 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e82f80e-e2cb-4040-832a-84adfa9ea71b" path="/var/lib/kubelet/pods/0e82f80e-e2cb-4040-832a-84adfa9ea71b/volumes" Oct 13 08:11:52 crc kubenswrapper[4833]: I1013 08:11:52.655736 4833 scope.go:117] "RemoveContainer" containerID="8bd47cee7a525c2bd51e570539b3ababb1085b0d89c94c1001bf0cbe47e02c10" Oct 13 08:11:52 crc kubenswrapper[4833]: I1013 08:11:52.700695 4833 scope.go:117] "RemoveContainer" containerID="eec1b88ea50ea1ada585bba5c540ba7497293dc120cae542bc0e6a6663965a7c" Oct 13 08:11:52 crc kubenswrapper[4833]: I1013 08:11:52.759521 4833 scope.go:117] "RemoveContainer" containerID="cae36f7c179114b8281334c08ba6a15e2615e3f522b744f4fdf707d88101d901" Oct 13 08:12:02 crc kubenswrapper[4833]: E1013 08:12:02.837558 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod120a8304_64d3_4f08_b340_8f0853335cba.slice/crio-conmon-c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:12:10 crc kubenswrapper[4833]: I1013 08:12:10.040907 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vt9xr"] Oct 13 08:12:10 crc kubenswrapper[4833]: I1013 08:12:10.058186 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vt9xr"] Oct 13 08:12:10 crc kubenswrapper[4833]: I1013 08:12:10.647230 4833 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="62f705b8-c040-4df0-9e2f-e9eb7a71b3ed" path="/var/lib/kubelet/pods/62f705b8-c040-4df0-9e2f-e9eb7a71b3ed/volumes" Oct 13 08:12:13 crc kubenswrapper[4833]: E1013 08:12:13.120419 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod120a8304_64d3_4f08_b340_8f0853335cba.slice/crio-conmon-c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:12:20 crc kubenswrapper[4833]: I1013 08:12:20.047860 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3032-account-create-x8xnn"] Oct 13 08:12:20 crc kubenswrapper[4833]: I1013 08:12:20.055316 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3032-account-create-x8xnn"] Oct 13 08:12:20 crc kubenswrapper[4833]: I1013 08:12:20.647057 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d242248e-b0f5-48a2-bf01-94af4ddf9f34" path="/var/lib/kubelet/pods/d242248e-b0f5-48a2-bf01-94af4ddf9f34/volumes" Oct 13 08:12:23 crc kubenswrapper[4833]: E1013 08:12:23.435676 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod120a8304_64d3_4f08_b340_8f0853335cba.slice/crio-conmon-c4a9d6bee216693889e794890612cc5a045375a031758df8af9d04e6ac7c39f4.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:12:29 crc kubenswrapper[4833]: I1013 08:12:29.032460 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cnqxb"] Oct 13 08:12:29 crc kubenswrapper[4833]: I1013 08:12:29.042437 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cnqxb"] Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.302051 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d95b76f49-5ppjz"] Oct 13 08:12:30 crc kubenswrapper[4833]: E1013 08:12:30.303855 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="registry-server" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.303972 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="registry-server" Oct 13 08:12:30 crc kubenswrapper[4833]: E1013 08:12:30.304058 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="extract-utilities" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.304139 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="extract-utilities" Oct 13 08:12:30 crc kubenswrapper[4833]: E1013 08:12:30.304226 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="extract-content" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.304303 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="extract-content" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.304752 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2c5c4b-b290-46ee-9075-b7e10bf408f5" containerName="registry-server" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.306200 4833 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.310062 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.315075 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.315404 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xx5vr" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.315724 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.318655 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d95b76f49-5ppjz"] Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.345927 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.346187 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerName="glance-log" containerID="cri-o://6a5afbe4012a503f715489125cf71b882308e6c76f318b7bdd7a537498fca932" gracePeriod=30 Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.346349 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerName="glance-httpd" containerID="cri-o://97c61ac8d49253595623e276bf34e21ce13946ddf579d589d7c5898c7404a95b" gracePeriod=30 Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.414869 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5645dff74f-9pj25"] Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.416391 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5645dff74f-9pj25" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.430218 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5645dff74f-9pj25"] Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.438941 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsnx9\" (UniqueName: \"kubernetes.io/projected/7801e397-48d5-4f29-8e2f-cc9228c6983f-kube-api-access-rsnx9\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.439073 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-config-data\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.439100 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-scripts\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.439156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7801e397-48d5-4f29-8e2f-cc9228c6983f-horizon-secret-key\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.439184 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7801e397-48d5-4f29-8e2f-cc9228c6983f-logs\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.444624 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.444826 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerName="glance-log" containerID="cri-o://a47a5aa6a855292702dd5b5c9fc13e725ac9267c86798cbc6677ec69594359c3" gracePeriod=30 Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.444949 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerName="glance-httpd" containerID="cri-o://c7bcdd9c07d5ad56dd5a7335442b83dd88136ed846c66a8003bbb890ba86216e" gracePeriod=30 Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsnx9\" (UniqueName: \"kubernetes.io/projected/7801e397-48d5-4f29-8e2f-cc9228c6983f-kube-api-access-rsnx9\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541270 4833 
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541291 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-scripts\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541327 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7801e397-48d5-4f29-8e2f-cc9228c6983f-horizon-secret-key\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541350 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7801e397-48d5-4f29-8e2f-cc9228c6983f-logs\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541383 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0de44410-584a-4827-97c0-0146aef11ca1-horizon-secret-key\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541402 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0de44410-584a-4827-97c0-0146aef11ca1-logs\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541428 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmfss\" (UniqueName: \"kubernetes.io/projected/0de44410-584a-4827-97c0-0146aef11ca1-kube-api-access-nmfss\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541494 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-config-data\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541529 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-scripts\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.541815 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7801e397-48d5-4f29-8e2f-cc9228c6983f-logs\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.542451 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-scripts\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.543196 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-config-data\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.547488 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7801e397-48d5-4f29-8e2f-cc9228c6983f-horizon-secret-key\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.558294 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsnx9\" (UniqueName: \"kubernetes.io/projected/7801e397-48d5-4f29-8e2f-cc9228c6983f-kube-api-access-rsnx9\") pod \"horizon-7d95b76f49-5ppjz\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.639141 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3efa7e52-a2ff-4ce0-a294-3f326ef52cde" path="/var/lib/kubelet/pods/3efa7e52-a2ff-4ce0-a294-3f326ef52cde/volumes"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.639746 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xx5vr"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.642635 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-config-data\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.642761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-scripts\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.642906 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0de44410-584a-4827-97c0-0146aef11ca1-horizon-secret-key\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.642978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0de44410-584a-4827-97c0-0146aef11ca1-logs\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.643052 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmfss\" (UniqueName: \"kubernetes.io/projected/0de44410-584a-4827-97c0-0146aef11ca1-kube-api-access-nmfss\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.643774 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0de44410-584a-4827-97c0-0146aef11ca1-logs\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.643823 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-config-data\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.643975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-scripts\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.647088 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0de44410-584a-4827-97c0-0146aef11ca1-horizon-secret-key\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.647667 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d95b76f49-5ppjz"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.658484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmfss\" (UniqueName: \"kubernetes.io/projected/0de44410-584a-4827-97c0-0146aef11ca1-kube-api-access-nmfss\") pod \"horizon-5645dff74f-9pj25\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.736835 4833 generic.go:334] "Generic (PLEG): container finished" podID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerID="6a5afbe4012a503f715489125cf71b882308e6c76f318b7bdd7a537498fca932" exitCode=143
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.736907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8e44ad3-827f-4a9c-8f7e-a41059fcf803","Type":"ContainerDied","Data":"6a5afbe4012a503f715489125cf71b882308e6c76f318b7bdd7a537498fca932"}
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.740506 4833 generic.go:334] "Generic (PLEG): container finished" podID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerID="a47a5aa6a855292702dd5b5c9fc13e725ac9267c86798cbc6677ec69594359c3" exitCode=143
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.740582 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbcef078-11a6-49ff-9d33-bc1a608c6f7f","Type":"ContainerDied","Data":"a47a5aa6a855292702dd5b5c9fc13e725ac9267c86798cbc6677ec69594359c3"}
Oct 13 08:12:30 crc kubenswrapper[4833]: I1013 08:12:30.754645 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5645dff74f-9pj25"
Oct 13 08:12:31 crc kubenswrapper[4833]: I1013 08:12:31.179222 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d95b76f49-5ppjz"]
Oct 13 08:12:31 crc kubenswrapper[4833]: I1013 08:12:31.313375 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5645dff74f-9pj25"]
Oct 13 08:12:31 crc kubenswrapper[4833]: W1013 08:12:31.315145 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de44410_584a_4827_97c0_0146aef11ca1.slice/crio-752d16f76ae3fb62c96b1adae695898e0db7a3ea2db223b0d6c24cc9c8b22e92 WatchSource:0}: Error finding container 752d16f76ae3fb62c96b1adae695898e0db7a3ea2db223b0d6c24cc9c8b22e92: Status 404 returned error can't find the container with id 752d16f76ae3fb62c96b1adae695898e0db7a3ea2db223b0d6c24cc9c8b22e92
Oct 13 08:12:31 crc kubenswrapper[4833]: I1013 08:12:31.753710 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5645dff74f-9pj25" event={"ID":"0de44410-584a-4827-97c0-0146aef11ca1","Type":"ContainerStarted","Data":"752d16f76ae3fb62c96b1adae695898e0db7a3ea2db223b0d6c24cc9c8b22e92"}
Oct 13 08:12:31 crc kubenswrapper[4833]: I1013 08:12:31.755386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d95b76f49-5ppjz" event={"ID":"7801e397-48d5-4f29-8e2f-cc9228c6983f","Type":"ContainerStarted","Data":"6f57955a6fca67afef914b927b594276820f4f1a495207a92ac806cfcd951a5f"}
Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.796627 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5645dff74f-9pj25"]
Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.831421 4833 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openstack/horizon-6859f6c64b-zcrl2"] Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.833112 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.837438 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.839104 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6859f6c64b-zcrl2"] Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.894175 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-scripts\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.894241 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-tls-certs\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.894298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-logs\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.894342 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-config-data\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.894438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlvl\" (UniqueName: \"kubernetes.io/projected/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-kube-api-access-8hlvl\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.894505 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-combined-ca-bundle\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.894705 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-secret-key\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.912714 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d95b76f49-5ppjz"] Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.927417 
4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66c86f59f8-n4tkm"] Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.932685 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.939368 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66c86f59f8-n4tkm"] Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996088 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-combined-ca-bundle\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d86df57-4667-4910-9463-8366b3080c9f-logs\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-scripts\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-tls-certs\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996412 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-secret-key\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996512 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-scripts\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996578 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-tls-certs\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996645 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-logs\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996686 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kxdm\" (UniqueName: \"kubernetes.io/projected/2d86df57-4667-4910-9463-8366b3080c9f-kube-api-access-4kxdm\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996717 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-combined-ca-bundle\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996743 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-secret-key\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-config-data\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996853 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-config-data\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.996938 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlvl\" (UniqueName: \"kubernetes.io/projected/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-kube-api-access-8hlvl\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.997837 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-logs\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.998179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-scripts\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:32 crc kubenswrapper[4833]: I1013 08:12:32.998559 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-config-data\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.002619 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-tls-certs\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.004784 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-combined-ca-bundle\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.016450 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-secret-key\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.017626 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlvl\" (UniqueName: \"kubernetes.io/projected/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-kube-api-access-8hlvl\") pod \"horizon-6859f6c64b-zcrl2\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.098730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d86df57-4667-4910-9463-8366b3080c9f-logs\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.099059 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-scripts\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.099094 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-tls-certs\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.099169 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kxdm\" (UniqueName: \"kubernetes.io/projected/2d86df57-4667-4910-9463-8366b3080c9f-kube-api-access-4kxdm\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.099187 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-combined-ca-bundle\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.099203 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-secret-key\") pod 
\"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.099239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-config-data\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.099262 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d86df57-4667-4910-9463-8366b3080c9f-logs\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.100907 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-scripts\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.101722 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-config-data\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.103122 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-secret-key\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.103459 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-combined-ca-bundle\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.104028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-tls-certs\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.116327 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kxdm\" (UniqueName: \"kubernetes.io/projected/2d86df57-4667-4910-9463-8366b3080c9f-kube-api-access-4kxdm\") pod \"horizon-66c86f59f8-n4tkm\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.159345 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.249121 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.601241 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6859f6c64b-zcrl2"] Oct 13 08:12:33 crc kubenswrapper[4833]: E1013 08:12:33.711223 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8e44ad3_827f_4a9c_8f7e_a41059fcf803.slice/crio-conmon-97c61ac8d49253595623e276bf34e21ce13946ddf579d589d7c5898c7404a95b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8e44ad3_827f_4a9c_8f7e_a41059fcf803.slice/crio-97c61ac8d49253595623e276bf34e21ce13946ddf579d589d7c5898c7404a95b.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.781330 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66c86f59f8-n4tkm"] Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.803506 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6859f6c64b-zcrl2" event={"ID":"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86","Type":"ContainerStarted","Data":"696d67f825fcc4fe373ea4bffe60112548a75dd37e725d492abe1711ffe54fd7"} Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.815919 4833 generic.go:334] "Generic (PLEG): container finished" podID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerID="c7bcdd9c07d5ad56dd5a7335442b83dd88136ed846c66a8003bbb890ba86216e" exitCode=0 Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.816062 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbcef078-11a6-49ff-9d33-bc1a608c6f7f","Type":"ContainerDied","Data":"c7bcdd9c07d5ad56dd5a7335442b83dd88136ed846c66a8003bbb890ba86216e"} Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.817980 4833 generic.go:334] "Generic (PLEG): container finished" podID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerID="97c61ac8d49253595623e276bf34e21ce13946ddf579d589d7c5898c7404a95b" exitCode=0 Oct 13 08:12:33 crc kubenswrapper[4833]: I1013 08:12:33.818005 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8e44ad3-827f-4a9c-8f7e-a41059fcf803","Type":"ContainerDied","Data":"97c61ac8d49253595623e276bf34e21ce13946ddf579d589d7c5898c7404a95b"} Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.118615 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.261388 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318342 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-logs\") pod \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318408 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc4np\" (UniqueName: \"kubernetes.io/projected/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-kube-api-access-zc4np\") pod \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318458 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-config-data\") pod \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318515 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-combined-ca-bundle\") pod \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318558 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-combined-ca-bundle\") pod \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318593 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-logs\") pod \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318612 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-public-tls-certs\") pod \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318639 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-internal-tls-certs\") pod \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318705 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-scripts\") pod \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318741 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-config-data\") pod \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\" (UID: 
\"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318809 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvrg6\" (UniqueName: \"kubernetes.io/projected/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-kube-api-access-jvrg6\") pod \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318833 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-httpd-run\") pod \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\" (UID: \"c8e44ad3-827f-4a9c-8f7e-a41059fcf803\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318897 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-httpd-run\") pod \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.318919 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-scripts\") pod \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\" (UID: \"cbcef078-11a6-49ff-9d33-bc1a608c6f7f\") " Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.319102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-logs" (OuterVolumeSpecName: "logs") pod "c8e44ad3-827f-4a9c-8f7e-a41059fcf803" (UID: "c8e44ad3-827f-4a9c-8f7e-a41059fcf803"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.319390 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.319401 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-logs" (OuterVolumeSpecName: "logs") pod "cbcef078-11a6-49ff-9d33-bc1a608c6f7f" (UID: "cbcef078-11a6-49ff-9d33-bc1a608c6f7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.319753 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c8e44ad3-827f-4a9c-8f7e-a41059fcf803" (UID: "c8e44ad3-827f-4a9c-8f7e-a41059fcf803"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.324244 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-kube-api-access-jvrg6" (OuterVolumeSpecName: "kube-api-access-jvrg6") pod "c8e44ad3-827f-4a9c-8f7e-a41059fcf803" (UID: "c8e44ad3-827f-4a9c-8f7e-a41059fcf803"). InnerVolumeSpecName "kube-api-access-jvrg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.324609 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cbcef078-11a6-49ff-9d33-bc1a608c6f7f" (UID: "cbcef078-11a6-49ff-9d33-bc1a608c6f7f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.331201 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-kube-api-access-zc4np" (OuterVolumeSpecName: "kube-api-access-zc4np") pod "cbcef078-11a6-49ff-9d33-bc1a608c6f7f" (UID: "cbcef078-11a6-49ff-9d33-bc1a608c6f7f"). InnerVolumeSpecName "kube-api-access-zc4np". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.331310 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-scripts" (OuterVolumeSpecName: "scripts") pod "c8e44ad3-827f-4a9c-8f7e-a41059fcf803" (UID: "c8e44ad3-827f-4a9c-8f7e-a41059fcf803"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.350846 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-scripts" (OuterVolumeSpecName: "scripts") pod "cbcef078-11a6-49ff-9d33-bc1a608c6f7f" (UID: "cbcef078-11a6-49ff-9d33-bc1a608c6f7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.368647 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8e44ad3-827f-4a9c-8f7e-a41059fcf803" (UID: "c8e44ad3-827f-4a9c-8f7e-a41059fcf803"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.379860 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbcef078-11a6-49ff-9d33-bc1a608c6f7f" (UID: "cbcef078-11a6-49ff-9d33-bc1a608c6f7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.411186 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-config-data" (OuterVolumeSpecName: "config-data") pod "c8e44ad3-827f-4a9c-8f7e-a41059fcf803" (UID: "c8e44ad3-827f-4a9c-8f7e-a41059fcf803"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.417759 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8e44ad3-827f-4a9c-8f7e-a41059fcf803" (UID: "c8e44ad3-827f-4a9c-8f7e-a41059fcf803"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420880 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420904 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc4np\" (UniqueName: \"kubernetes.io/projected/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-kube-api-access-zc4np\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420915 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420926 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420934 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420942 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420952 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420960 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvrg6\" (UniqueName: \"kubernetes.io/projected/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-kube-api-access-jvrg6\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420968 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8e44ad3-827f-4a9c-8f7e-a41059fcf803-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420975 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.420983 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.430034 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-config-data" (OuterVolumeSpecName: "config-data") pod "cbcef078-11a6-49ff-9d33-bc1a608c6f7f" (UID: "cbcef078-11a6-49ff-9d33-bc1a608c6f7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.443324 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cbcef078-11a6-49ff-9d33-bc1a608c6f7f" (UID: "cbcef078-11a6-49ff-9d33-bc1a608c6f7f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.522514 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.522557 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcef078-11a6-49ff-9d33-bc1a608c6f7f-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.847947 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8e44ad3-827f-4a9c-8f7e-a41059fcf803","Type":"ContainerDied","Data":"cf2a6323a9ece9e4c80a38cfb93d7f0a6ffbca9601eaeb92a5a2082398d71caf"} Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.848191 4833 scope.go:117] "RemoveContainer" containerID="97c61ac8d49253595623e276bf34e21ce13946ddf579d589d7c5898c7404a95b" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.848314 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.850803 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c86f59f8-n4tkm" event={"ID":"2d86df57-4667-4910-9463-8366b3080c9f","Type":"ContainerStarted","Data":"3c40f05a27514a42c5ccf846f6fbf566f5c76ffcfba43617e26958d222b25668"} Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.854853 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbcef078-11a6-49ff-9d33-bc1a608c6f7f","Type":"ContainerDied","Data":"acaf13fa54a0539d83207d42cd4bea354f21fdf154e6a7babffc4ac5a4180bc2"} Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.854884 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.888719 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.919485 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.954663 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:12:34 crc kubenswrapper[4833]: E1013 08:12:34.955159 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerName="glance-log" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.955178 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerName="glance-log" Oct 13 08:12:34 crc kubenswrapper[4833]: E1013 08:12:34.955189 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerName="glance-httpd" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.955195 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerName="glance-httpd" Oct 13 08:12:34 crc kubenswrapper[4833]: E1013 08:12:34.955218 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerName="glance-httpd" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.955224 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerName="glance-httpd" Oct 13 08:12:34 crc kubenswrapper[4833]: E1013 08:12:34.955237 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerName="glance-log" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.955243 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerName="glance-log" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.955415 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerName="glance-httpd" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.955433 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" containerName="glance-log" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.955450 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerName="glance-httpd" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.955463 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" containerName="glance-log" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.957753 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.963829 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.964601 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l8ns9" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.965848 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.966003 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.966134 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.985969 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:12:34 crc kubenswrapper[4833]: I1013 08:12:34.999355 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.011873 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.013594 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.015683 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.015759 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.021154 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040079 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aadbd2b5-d3a3-4eca-857b-efca637a54ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040112 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040134 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrdc\" (UniqueName: \"kubernetes.io/projected/cbf8b795-b01a-48f8-8470-0aea6a0c2556-kube-api-access-vfrdc\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040153 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztmhd\" (UniqueName: 
\"kubernetes.io/projected/aadbd2b5-d3a3-4eca-857b-efca637a54ae-kube-api-access-ztmhd\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040189 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040208 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aadbd2b5-d3a3-4eca-857b-efca637a54ae-logs\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040273 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040296 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf8b795-b01a-48f8-8470-0aea6a0c2556-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040315 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf8b795-b01a-48f8-8470-0aea6a0c2556-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040343 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040367 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.040384 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144477 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfrdc\" (UniqueName: \"kubernetes.io/projected/cbf8b795-b01a-48f8-8470-0aea6a0c2556-kube-api-access-vfrdc\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144515 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztmhd\" (UniqueName: \"kubernetes.io/projected/aadbd2b5-d3a3-4eca-857b-efca637a54ae-kube-api-access-ztmhd\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144647 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aadbd2b5-d3a3-4eca-857b-efca637a54ae-logs\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144756 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144796 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf8b795-b01a-48f8-8470-0aea6a0c2556-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144817 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf8b795-b01a-48f8-8470-0aea6a0c2556-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144863 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144889 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.144990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aadbd2b5-d3a3-4eca-857b-efca637a54ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.145006 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.146530 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf8b795-b01a-48f8-8470-0aea6a0c2556-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.155780 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aadbd2b5-d3a3-4eca-857b-efca637a54ae-logs\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.156100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aadbd2b5-d3a3-4eca-857b-efca637a54ae-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.157996 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf8b795-b01a-48f8-8470-0aea6a0c2556-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.167257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.168561 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.171388 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.178512 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztmhd\" (UniqueName: \"kubernetes.io/projected/aadbd2b5-d3a3-4eca-857b-efca637a54ae-kube-api-access-ztmhd\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.179533 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfrdc\" (UniqueName: \"kubernetes.io/projected/cbf8b795-b01a-48f8-8470-0aea6a0c2556-kube-api-access-vfrdc\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.179664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.179990 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf8b795-b01a-48f8-8470-0aea6a0c2556-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbf8b795-b01a-48f8-8470-0aea6a0c2556\") " pod="openstack/glance-default-internal-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.182155 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " 
pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.186124 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.209451 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aadbd2b5-d3a3-4eca-857b-efca637a54ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aadbd2b5-d3a3-4eca-857b-efca637a54ae\") " pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.293089 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 08:12:35 crc kubenswrapper[4833]: I1013 08:12:35.407677 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:36 crc kubenswrapper[4833]: I1013 08:12:36.641028 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e44ad3-827f-4a9c-8f7e-a41059fcf803" path="/var/lib/kubelet/pods/c8e44ad3-827f-4a9c-8f7e-a41059fcf803/volumes" Oct 13 08:12:36 crc kubenswrapper[4833]: I1013 08:12:36.642592 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbcef078-11a6-49ff-9d33-bc1a608c6f7f" path="/var/lib/kubelet/pods/cbcef078-11a6-49ff-9d33-bc1a608c6f7f/volumes" Oct 13 08:12:39 crc kubenswrapper[4833]: I1013 08:12:39.475852 4833 scope.go:117] "RemoveContainer" containerID="6a5afbe4012a503f715489125cf71b882308e6c76f318b7bdd7a537498fca932" Oct 13 08:12:39 crc kubenswrapper[4833]: I1013 08:12:39.628857 4833 scope.go:117] "RemoveContainer" containerID="c7bcdd9c07d5ad56dd5a7335442b83dd88136ed846c66a8003bbb890ba86216e" Oct 13 08:12:39 crc kubenswrapper[4833]: I1013 08:12:39.838110 4833 scope.go:117] "RemoveContainer" containerID="a47a5aa6a855292702dd5b5c9fc13e725ac9267c86798cbc6677ec69594359c3" Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.159523 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 08:12:40 crc kubenswrapper[4833]: W1013 08:12:40.173658 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaadbd2b5_d3a3_4eca_857b_efca637a54ae.slice/crio-d02c9feb67d73f2bdd59cab24b816d3b496251cf7f503dd5abd2db9c64849152 WatchSource:0}: Error finding container d02c9feb67d73f2bdd59cab24b816d3b496251cf7f503dd5abd2db9c64849152: Status 404 returned error can't find the container with id d02c9feb67d73f2bdd59cab24b816d3b496251cf7f503dd5abd2db9c64849152 Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.280675 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.957171 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbf8b795-b01a-48f8-8470-0aea6a0c2556","Type":"ContainerStarted","Data":"2923a1ff9634a7a42a6d633c83b11c28ad5a391c3ad41fe91f0d413e0a5da136"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.957412 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"cbf8b795-b01a-48f8-8470-0aea6a0c2556","Type":"ContainerStarted","Data":"77b44427dd809c6ff7492641e666af777bef4d7e6d303e9fcf452fe25d888b69"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.961956 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6859f6c64b-zcrl2" event={"ID":"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86","Type":"ContainerStarted","Data":"bf3f40db32563f697e49da9c09f691b6702b2eb1614e7811f3e4d291fe7e7b77"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.961997 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6859f6c64b-zcrl2" event={"ID":"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86","Type":"ContainerStarted","Data":"1f23d793e01873b325b36a882efa46148ffc7926a68a066ec6cc5d88494d8042"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.975765 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c86f59f8-n4tkm" event={"ID":"2d86df57-4667-4910-9463-8366b3080c9f","Type":"ContainerStarted","Data":"ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.975811 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c86f59f8-n4tkm" event={"ID":"2d86df57-4667-4910-9463-8366b3080c9f","Type":"ContainerStarted","Data":"994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.985087 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6859f6c64b-zcrl2" podStartSLOduration=2.96396847 podStartE2EDuration="8.985068423s" podCreationTimestamp="2025-10-13 08:12:32 +0000 UTC" firstStartedPulling="2025-10-13 08:12:33.666567563 +0000 UTC m=+6243.766990479" lastFinishedPulling="2025-10-13 08:12:39.687667516 +0000 UTC m=+6249.788090432" observedRunningTime="2025-10-13 08:12:40.980636727 +0000 UTC m=+6251.081059643" watchObservedRunningTime="2025-10-13 08:12:40.985068423 +0000 UTC m=+6251.085491339" Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.994841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d95b76f49-5ppjz" event={"ID":"7801e397-48d5-4f29-8e2f-cc9228c6983f","Type":"ContainerStarted","Data":"f1b33372ec55e2ffd2dab3a1f9e054331d51a8ecd9d5fc5af1543b4db6551d7a"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.994893 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d95b76f49-5ppjz" event={"ID":"7801e397-48d5-4f29-8e2f-cc9228c6983f","Type":"ContainerStarted","Data":"d47939fbbddb964c07341fc5f4cf48d3459f1ac5b9f6866115285cf5d8ac01d4"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.995035 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d95b76f49-5ppjz" podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerName="horizon-log" containerID="cri-o://d47939fbbddb964c07341fc5f4cf48d3459f1ac5b9f6866115285cf5d8ac01d4" gracePeriod=30 Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.995210 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d95b76f49-5ppjz" podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerName="horizon" containerID="cri-o://f1b33372ec55e2ffd2dab3a1f9e054331d51a8ecd9d5fc5af1543b4db6551d7a" gracePeriod=30 Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.998496 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"aadbd2b5-d3a3-4eca-857b-efca637a54ae","Type":"ContainerStarted","Data":"a92d104b4caa731fba915d63d17484fd0164c0415d6bb55629225a3c491da0df"} Oct 13 08:12:40 crc kubenswrapper[4833]: I1013 08:12:40.998557 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aadbd2b5-d3a3-4eca-857b-efca637a54ae","Type":"ContainerStarted","Data":"d02c9feb67d73f2bdd59cab24b816d3b496251cf7f503dd5abd2db9c64849152"} Oct 13 08:12:41 crc kubenswrapper[4833]: I1013 08:12:41.006733 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66c86f59f8-n4tkm" podStartSLOduration=3.129186335 podStartE2EDuration="9.006715968s" podCreationTimestamp="2025-10-13 08:12:32 +0000 UTC" firstStartedPulling="2025-10-13 08:12:33.799881902 +0000 UTC m=+6243.900304808" lastFinishedPulling="2025-10-13 08:12:39.677411485 +0000 UTC m=+6249.777834441" observedRunningTime="2025-10-13 08:12:41.006201964 +0000 UTC m=+6251.106624890" watchObservedRunningTime="2025-10-13 08:12:41.006715968 +0000 UTC m=+6251.107138884" Oct 13 08:12:41 crc kubenswrapper[4833]: I1013 08:12:41.011329 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5645dff74f-9pj25" event={"ID":"0de44410-584a-4827-97c0-0146aef11ca1","Type":"ContainerStarted","Data":"aeadc2522d58a707ca549ea4fb9c11b795b8f6a1c3ad537434a84ae0bb5cb8e5"} Oct 13 08:12:41 crc kubenswrapper[4833]: I1013 08:12:41.011399 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5645dff74f-9pj25" event={"ID":"0de44410-584a-4827-97c0-0146aef11ca1","Type":"ContainerStarted","Data":"90692c6dd467ba5af220225028e031747b42715c05ae39e0a12c1a273ea28d14"} Oct 13 08:12:41 crc kubenswrapper[4833]: I1013 08:12:41.011439 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5645dff74f-9pj25" podUID="0de44410-584a-4827-97c0-0146aef11ca1" containerName="horizon-log" containerID="cri-o://90692c6dd467ba5af220225028e031747b42715c05ae39e0a12c1a273ea28d14" gracePeriod=30 Oct 13 08:12:41 crc kubenswrapper[4833]: I1013 08:12:41.011447 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5645dff74f-9pj25" podUID="0de44410-584a-4827-97c0-0146aef11ca1" containerName="horizon" containerID="cri-o://aeadc2522d58a707ca549ea4fb9c11b795b8f6a1c3ad537434a84ae0bb5cb8e5" gracePeriod=30 Oct 13 08:12:41 crc kubenswrapper[4833]: I1013 08:12:41.033129 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d95b76f49-5ppjz" podStartSLOduration=2.466223854 podStartE2EDuration="11.033105328s" podCreationTimestamp="2025-10-13 08:12:30 +0000 UTC" firstStartedPulling="2025-10-13 08:12:31.185290156 +0000 UTC m=+6241.285713072" lastFinishedPulling="2025-10-13 08:12:39.75217163 +0000 UTC m=+6249.852594546" observedRunningTime="2025-10-13 08:12:41.031890854 +0000 UTC m=+6251.132313770" watchObservedRunningTime="2025-10-13 08:12:41.033105328 +0000 UTC m=+6251.133528234" Oct 13 08:12:41 crc kubenswrapper[4833]: I1013 08:12:41.068088 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5645dff74f-9pj25" podStartSLOduration=2.612933294 podStartE2EDuration="11.068070622s" podCreationTimestamp="2025-10-13 08:12:30 +0000 UTC" firstStartedPulling="2025-10-13 08:12:31.318383149 +0000 UTC m=+6241.418806085" lastFinishedPulling="2025-10-13 08:12:39.773520497 +0000 UTC m=+6249.873943413" observedRunningTime="2025-10-13 08:12:41.054855557 +0000 
UTC m=+6251.155278483" watchObservedRunningTime="2025-10-13 08:12:41.068070622 +0000 UTC m=+6251.168493538" Oct 13 08:12:42 crc kubenswrapper[4833]: I1013 08:12:42.021627 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aadbd2b5-d3a3-4eca-857b-efca637a54ae","Type":"ContainerStarted","Data":"3b15070fbadafbabec1ecf6cb9a684553fc9808d1de7051674aa2823d9e0f61f"} Oct 13 08:12:42 crc kubenswrapper[4833]: I1013 08:12:42.023525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbf8b795-b01a-48f8-8470-0aea6a0c2556","Type":"ContainerStarted","Data":"75f4d343f6f1e0823a67a98ac013d25e33c2c67b9363e677a58b19a28d38ff2d"} Oct 13 08:12:42 crc kubenswrapper[4833]: I1013 08:12:42.042742 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.042725386 podStartE2EDuration="8.042725386s" podCreationTimestamp="2025-10-13 08:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:12:42.037147698 +0000 UTC m=+6252.137570614" watchObservedRunningTime="2025-10-13 08:12:42.042725386 +0000 UTC m=+6252.143148302" Oct 13 08:12:42 crc kubenswrapper[4833]: I1013 08:12:42.065847 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.065827213 podStartE2EDuration="8.065827213s" podCreationTimestamp="2025-10-13 08:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:12:42.055671494 +0000 UTC m=+6252.156094420" watchObservedRunningTime="2025-10-13 08:12:42.065827213 +0000 UTC m=+6252.166250129" Oct 13 08:12:43 crc kubenswrapper[4833]: I1013 08:12:43.159507 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:43 crc kubenswrapper[4833]: I1013 08:12:43.159616 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:12:43 crc kubenswrapper[4833]: I1013 08:12:43.249749 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:43 crc kubenswrapper[4833]: I1013 08:12:43.249799 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:12:45 crc kubenswrapper[4833]: I1013 08:12:45.293693 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 08:12:45 crc kubenswrapper[4833]: I1013 08:12:45.294681 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 08:12:45 crc kubenswrapper[4833]: I1013 08:12:45.325953 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 08:12:45 crc kubenswrapper[4833]: I1013 08:12:45.353630 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 08:12:45 crc kubenswrapper[4833]: I1013 08:12:45.408604 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:45 crc kubenswrapper[4833]: 
I1013 08:12:45.408650 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:45 crc kubenswrapper[4833]: I1013 08:12:45.455965 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:45 crc kubenswrapper[4833]: I1013 08:12:45.457962 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:46 crc kubenswrapper[4833]: I1013 08:12:46.070463 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:46 crc kubenswrapper[4833]: I1013 08:12:46.070505 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 08:12:46 crc kubenswrapper[4833]: I1013 08:12:46.070518 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:46 crc kubenswrapper[4833]: I1013 08:12:46.070529 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 08:12:47 crc kubenswrapper[4833]: I1013 08:12:47.949388 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:47 crc kubenswrapper[4833]: I1013 08:12:47.951771 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 08:12:47 crc kubenswrapper[4833]: I1013 08:12:47.982585 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 08:12:48 crc kubenswrapper[4833]: I1013 08:12:48.090433 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 08:12:50 crc kubenswrapper[4833]: I1013 08:12:50.648126 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:12:50 crc kubenswrapper[4833]: I1013 08:12:50.755279 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5645dff74f-9pj25" Oct 13 08:12:52 crc kubenswrapper[4833]: I1013 08:12:52.897904 4833 scope.go:117] "RemoveContainer" containerID="2f69f3d34d6d455330886244c23d5da749e1f774325cb65cca32d66a215e3b41" Oct 13 08:12:52 crc kubenswrapper[4833]: I1013 08:12:52.940492 4833 scope.go:117] "RemoveContainer" containerID="f107cc8b2a19371cedc2a10f03fbf969373d3440068d4180c418d0d725820b85" Oct 13 08:12:52 crc kubenswrapper[4833]: I1013 08:12:52.996443 4833 scope.go:117] "RemoveContainer" containerID="56f49c78bc36d5b56105a0b0172978f946dbf5bf3df6d8e6f5df850affcb6085" Oct 13 08:12:53 crc kubenswrapper[4833]: I1013 08:12:53.162076 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6859f6c64b-zcrl2" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.122:8443: connect: connection refused" Oct 13 08:12:53 crc kubenswrapper[4833]: I1013 08:12:53.251804 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66c86f59f8-n4tkm" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.1.123:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.123:8443: connect: connection refused" Oct 13 08:13:00 crc kubenswrapper[4833]: I1013 08:13:00.543222 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:13:00 crc kubenswrapper[4833]: I1013 08:13:00.543771 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:13:05 crc kubenswrapper[4833]: I1013 08:13:05.042428 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:13:05 crc kubenswrapper[4833]: I1013 08:13:05.046234 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:13:06 crc kubenswrapper[4833]: I1013 08:13:06.625328 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:13:06 crc kubenswrapper[4833]: I1013 08:13:06.654991 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:13:06 crc kubenswrapper[4833]: I1013 08:13:06.709738 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6859f6c64b-zcrl2"] Oct 13 08:13:07 crc kubenswrapper[4833]: I1013 08:13:07.321348 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6859f6c64b-zcrl2" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon-log" containerID="cri-o://bf3f40db32563f697e49da9c09f691b6702b2eb1614e7811f3e4d291fe7e7b77" gracePeriod=30 Oct 13 08:13:07 crc kubenswrapper[4833]: I1013 08:13:07.321497 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6859f6c64b-zcrl2" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon" containerID="cri-o://1f23d793e01873b325b36a882efa46148ffc7926a68a066ec6cc5d88494d8042" gracePeriod=30 Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.366095 4833 generic.go:334] "Generic (PLEG): container finished" podID="0de44410-584a-4827-97c0-0146aef11ca1" containerID="aeadc2522d58a707ca549ea4fb9c11b795b8f6a1c3ad537434a84ae0bb5cb8e5" exitCode=137 Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.366846 4833 generic.go:334] "Generic (PLEG): container finished" podID="0de44410-584a-4827-97c0-0146aef11ca1" containerID="90692c6dd467ba5af220225028e031747b42715c05ae39e0a12c1a273ea28d14" exitCode=137 Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.366199 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5645dff74f-9pj25" event={"ID":"0de44410-584a-4827-97c0-0146aef11ca1","Type":"ContainerDied","Data":"aeadc2522d58a707ca549ea4fb9c11b795b8f6a1c3ad537434a84ae0bb5cb8e5"} Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.366979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5645dff74f-9pj25" 
event={"ID":"0de44410-584a-4827-97c0-0146aef11ca1","Type":"ContainerDied","Data":"90692c6dd467ba5af220225028e031747b42715c05ae39e0a12c1a273ea28d14"} Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.369087 4833 generic.go:334] "Generic (PLEG): container finished" podID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerID="1f23d793e01873b325b36a882efa46148ffc7926a68a066ec6cc5d88494d8042" exitCode=0 Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.369144 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6859f6c64b-zcrl2" event={"ID":"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86","Type":"ContainerDied","Data":"1f23d793e01873b325b36a882efa46148ffc7926a68a066ec6cc5d88494d8042"} Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.370905 4833 generic.go:334] "Generic (PLEG): container finished" podID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerID="f1b33372ec55e2ffd2dab3a1f9e054331d51a8ecd9d5fc5af1543b4db6551d7a" exitCode=137 Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.371006 4833 generic.go:334] "Generic (PLEG): container finished" podID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerID="d47939fbbddb964c07341fc5f4cf48d3459f1ac5b9f6866115285cf5d8ac01d4" exitCode=137 Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.371036 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d95b76f49-5ppjz" event={"ID":"7801e397-48d5-4f29-8e2f-cc9228c6983f","Type":"ContainerDied","Data":"f1b33372ec55e2ffd2dab3a1f9e054331d51a8ecd9d5fc5af1543b4db6551d7a"} Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.371073 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d95b76f49-5ppjz" event={"ID":"7801e397-48d5-4f29-8e2f-cc9228c6983f","Type":"ContainerDied","Data":"d47939fbbddb964c07341fc5f4cf48d3459f1ac5b9f6866115285cf5d8ac01d4"} Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.555054 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.570410 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5645dff74f-9pj25" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628039 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0de44410-584a-4827-97c0-0146aef11ca1-horizon-secret-key\") pod \"0de44410-584a-4827-97c0-0146aef11ca1\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628119 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-scripts\") pod \"7801e397-48d5-4f29-8e2f-cc9228c6983f\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628183 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-scripts\") pod \"0de44410-584a-4827-97c0-0146aef11ca1\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0de44410-584a-4827-97c0-0146aef11ca1-logs\") pod \"0de44410-584a-4827-97c0-0146aef11ca1\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmfss\" (UniqueName: \"kubernetes.io/projected/0de44410-584a-4827-97c0-0146aef11ca1-kube-api-access-nmfss\") pod \"0de44410-584a-4827-97c0-0146aef11ca1\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628419 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-config-data\") pod \"0de44410-584a-4827-97c0-0146aef11ca1\" (UID: \"0de44410-584a-4827-97c0-0146aef11ca1\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628515 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-config-data\") pod \"7801e397-48d5-4f29-8e2f-cc9228c6983f\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsnx9\" (UniqueName: \"kubernetes.io/projected/7801e397-48d5-4f29-8e2f-cc9228c6983f-kube-api-access-rsnx9\") pod \"7801e397-48d5-4f29-8e2f-cc9228c6983f\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628662 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7801e397-48d5-4f29-8e2f-cc9228c6983f-horizon-secret-key\") pod \"7801e397-48d5-4f29-8e2f-cc9228c6983f\" (UID: \"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.628757 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7801e397-48d5-4f29-8e2f-cc9228c6983f-logs\") pod \"7801e397-48d5-4f29-8e2f-cc9228c6983f\" (UID: 
\"7801e397-48d5-4f29-8e2f-cc9228c6983f\") " Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.630011 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7801e397-48d5-4f29-8e2f-cc9228c6983f-logs" (OuterVolumeSpecName: "logs") pod "7801e397-48d5-4f29-8e2f-cc9228c6983f" (UID: "7801e397-48d5-4f29-8e2f-cc9228c6983f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.636829 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de44410-584a-4827-97c0-0146aef11ca1-logs" (OuterVolumeSpecName: "logs") pod "0de44410-584a-4827-97c0-0146aef11ca1" (UID: "0de44410-584a-4827-97c0-0146aef11ca1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.639024 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de44410-584a-4827-97c0-0146aef11ca1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0de44410-584a-4827-97c0-0146aef11ca1" (UID: "0de44410-584a-4827-97c0-0146aef11ca1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.639158 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7801e397-48d5-4f29-8e2f-cc9228c6983f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7801e397-48d5-4f29-8e2f-cc9228c6983f" (UID: "7801e397-48d5-4f29-8e2f-cc9228c6983f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.639246 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de44410-584a-4827-97c0-0146aef11ca1-kube-api-access-nmfss" (OuterVolumeSpecName: "kube-api-access-nmfss") pod "0de44410-584a-4827-97c0-0146aef11ca1" (UID: "0de44410-584a-4827-97c0-0146aef11ca1"). InnerVolumeSpecName "kube-api-access-nmfss". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.643717 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7801e397-48d5-4f29-8e2f-cc9228c6983f-kube-api-access-rsnx9" (OuterVolumeSpecName: "kube-api-access-rsnx9") pod "7801e397-48d5-4f29-8e2f-cc9228c6983f" (UID: "7801e397-48d5-4f29-8e2f-cc9228c6983f"). InnerVolumeSpecName "kube-api-access-rsnx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.665661 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-scripts" (OuterVolumeSpecName: "scripts") pod "7801e397-48d5-4f29-8e2f-cc9228c6983f" (UID: "7801e397-48d5-4f29-8e2f-cc9228c6983f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.667127 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-config-data" (OuterVolumeSpecName: "config-data") pod "0de44410-584a-4827-97c0-0146aef11ca1" (UID: "0de44410-584a-4827-97c0-0146aef11ca1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.667752 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-config-data" (OuterVolumeSpecName: "config-data") pod "7801e397-48d5-4f29-8e2f-cc9228c6983f" (UID: "7801e397-48d5-4f29-8e2f-cc9228c6983f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.670801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-scripts" (OuterVolumeSpecName: "scripts") pod "0de44410-584a-4827-97c0-0146aef11ca1" (UID: "0de44410-584a-4827-97c0-0146aef11ca1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732597 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsnx9\" (UniqueName: \"kubernetes.io/projected/7801e397-48d5-4f29-8e2f-cc9228c6983f-kube-api-access-rsnx9\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732638 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7801e397-48d5-4f29-8e2f-cc9228c6983f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732697 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7801e397-48d5-4f29-8e2f-cc9228c6983f-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732716 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0de44410-584a-4827-97c0-0146aef11ca1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732736 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732752 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732769 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0de44410-584a-4827-97c0-0146aef11ca1-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732786 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmfss\" (UniqueName: \"kubernetes.io/projected/0de44410-584a-4827-97c0-0146aef11ca1-kube-api-access-nmfss\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732803 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0de44410-584a-4827-97c0-0146aef11ca1-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:11 crc kubenswrapper[4833]: I1013 08:13:11.732818 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7801e397-48d5-4f29-8e2f-cc9228c6983f-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 
08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.388062 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5645dff74f-9pj25" event={"ID":"0de44410-584a-4827-97c0-0146aef11ca1","Type":"ContainerDied","Data":"752d16f76ae3fb62c96b1adae695898e0db7a3ea2db223b0d6c24cc9c8b22e92"} Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.388139 4833 scope.go:117] "RemoveContainer" containerID="aeadc2522d58a707ca549ea4fb9c11b795b8f6a1c3ad537434a84ae0bb5cb8e5" Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.388315 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5645dff74f-9pj25" Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.395593 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d95b76f49-5ppjz" event={"ID":"7801e397-48d5-4f29-8e2f-cc9228c6983f","Type":"ContainerDied","Data":"6f57955a6fca67afef914b927b594276820f4f1a495207a92ac806cfcd951a5f"} Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.395719 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d95b76f49-5ppjz" Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.468876 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5645dff74f-9pj25"] Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.480955 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5645dff74f-9pj25"] Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.495063 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d95b76f49-5ppjz"] Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.506341 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d95b76f49-5ppjz"] Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.639944 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de44410-584a-4827-97c0-0146aef11ca1" path="/var/lib/kubelet/pods/0de44410-584a-4827-97c0-0146aef11ca1/volumes" Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.640960 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" path="/var/lib/kubelet/pods/7801e397-48d5-4f29-8e2f-cc9228c6983f/volumes" Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.648588 4833 scope.go:117] "RemoveContainer" containerID="90692c6dd467ba5af220225028e031747b42715c05ae39e0a12c1a273ea28d14" Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.672380 4833 scope.go:117] "RemoveContainer" containerID="f1b33372ec55e2ffd2dab3a1f9e054331d51a8ecd9d5fc5af1543b4db6551d7a" Oct 13 08:13:12 crc kubenswrapper[4833]: I1013 08:13:12.908031 4833 scope.go:117] "RemoveContainer" containerID="d47939fbbddb964c07341fc5f4cf48d3459f1ac5b9f6866115285cf5d8ac01d4" Oct 13 08:13:13 crc kubenswrapper[4833]: I1013 08:13:13.160724 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6859f6c64b-zcrl2" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.122:8443: connect: connection refused" Oct 13 08:13:23 crc kubenswrapper[4833]: I1013 08:13:23.160772 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6859f6c64b-zcrl2" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.122:8443: connect: connection refused" Oct 13 08:13:30 crc kubenswrapper[4833]: I1013 08:13:30.542641 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:13:30 crc kubenswrapper[4833]: I1013 08:13:30.543206 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:13:33 crc kubenswrapper[4833]: I1013 08:13:33.160166 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6859f6c64b-zcrl2" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.122:8443: connect: connection refused" Oct 13 08:13:33 crc kubenswrapper[4833]: I1013 08:13:33.160666 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.037588 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gvmh4"] Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.047244 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-gvmh4"] Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.699884 4833 generic.go:334] "Generic (PLEG): container finished" podID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerID="bf3f40db32563f697e49da9c09f691b6702b2eb1614e7811f3e4d291fe7e7b77" exitCode=137 Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.699999 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6859f6c64b-zcrl2" event={"ID":"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86","Type":"ContainerDied","Data":"bf3f40db32563f697e49da9c09f691b6702b2eb1614e7811f3e4d291fe7e7b77"} Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.816489 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.945785 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-secret-key\") pod \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.945934 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-logs\") pod \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.946121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-tls-certs\") pod \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.946198 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hlvl\" (UniqueName: \"kubernetes.io/projected/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-kube-api-access-8hlvl\") pod \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.946252 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-combined-ca-bundle\") pod \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.946301 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-scripts\") pod \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.946370 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-config-data\") pod \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\" (UID: \"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86\") " Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.946470 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-logs" (OuterVolumeSpecName: "logs") pod "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" (UID: "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.947041 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.956847 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-kube-api-access-8hlvl" (OuterVolumeSpecName: "kube-api-access-8hlvl") pod "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" (UID: "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86"). 
InnerVolumeSpecName "kube-api-access-8hlvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.962426 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" (UID: "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.974006 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-config-data" (OuterVolumeSpecName: "config-data") pod "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" (UID: "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.977006 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" (UID: "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:13:37 crc kubenswrapper[4833]: I1013 08:13:37.986212 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-scripts" (OuterVolumeSpecName: "scripts") pod "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" (UID: "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.003088 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" (UID: "3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.050029 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.050070 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.050083 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hlvl\" (UniqueName: \"kubernetes.io/projected/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-kube-api-access-8hlvl\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.050101 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.050113 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.050125 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.647720 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d" path="/var/lib/kubelet/pods/5f8b4645-97a7-4bb8-b15e-35ad4c8bf70d/volumes" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.716354 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6859f6c64b-zcrl2" event={"ID":"3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86","Type":"ContainerDied","Data":"696d67f825fcc4fe373ea4bffe60112548a75dd37e725d492abe1711ffe54fd7"} Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.716438 4833 scope.go:117] "RemoveContainer" containerID="1f23d793e01873b325b36a882efa46148ffc7926a68a066ec6cc5d88494d8042" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.716474 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6859f6c64b-zcrl2" Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.760186 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6859f6c64b-zcrl2"] Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.772482 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6859f6c64b-zcrl2"] Oct 13 08:13:38 crc kubenswrapper[4833]: I1013 08:13:38.990105 4833 scope.go:117] "RemoveContainer" containerID="bf3f40db32563f697e49da9c09f691b6702b2eb1614e7811f3e4d291fe7e7b77" Oct 13 08:13:40 crc kubenswrapper[4833]: I1013 08:13:40.646116 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" path="/var/lib/kubelet/pods/3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86/volumes" Oct 13 08:13:47 crc kubenswrapper[4833]: I1013 08:13:47.042898 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4a75-account-create-xgv4p"] Oct 13 08:13:47 crc kubenswrapper[4833]: I1013 08:13:47.052289 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4a75-account-create-xgv4p"] Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.209735 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b4c45f97d-8pghc"] Oct 13 08:13:48 crc kubenswrapper[4833]: E1013 08:13:48.210469 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210486 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: E1013 08:13:48.210505 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210512 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: E1013 08:13:48.210527 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de44410-584a-4827-97c0-0146aef11ca1" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210556 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de44410-584a-4827-97c0-0146aef11ca1" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: E1013 08:13:48.210576 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de44410-584a-4827-97c0-0146aef11ca1" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210583 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de44410-584a-4827-97c0-0146aef11ca1" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: E1013 08:13:48.210607 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210614 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: E1013 08:13:48.210631 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210639 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210865 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de44410-584a-4827-97c0-0146aef11ca1" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210883 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210898 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerName="horizon-log" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210912 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7801e397-48d5-4f29-8e2f-cc9228c6983f" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210925 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de44410-584a-4827-97c0-0146aef11ca1" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.210942 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad2d8fb-5b8d-4f5e-8ad3-313e21f1ee86" containerName="horizon" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.212098 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.230373 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b4c45f97d-8pghc"] Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.307277 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-config-data\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.307369 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-logs\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.307413 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7t78\" (UniqueName: \"kubernetes.io/projected/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-kube-api-access-p7t78\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.307438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-horizon-secret-key\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.307489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-scripts\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " 
pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.307604 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-horizon-tls-certs\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.307641 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-combined-ca-bundle\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.409203 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-logs\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.409281 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t78\" (UniqueName: \"kubernetes.io/projected/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-kube-api-access-p7t78\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.409308 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-horizon-secret-key\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.409354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-scripts\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.409449 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-horizon-tls-certs\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.409484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-combined-ca-bundle\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.409505 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-config-data\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.409671 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-logs\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.410163 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-scripts\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.410602 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-config-data\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.417197 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-combined-ca-bundle\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.417851 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-horizon-tls-certs\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.417863 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-horizon-secret-key\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.424074 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7t78\" (UniqueName: \"kubernetes.io/projected/1197b9de-5ae7-42f1-b0f6-7c1adcc009e8-kube-api-access-p7t78\") pod \"horizon-b4c45f97d-8pghc\" (UID: \"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8\") " pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.580134 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:48 crc kubenswrapper[4833]: I1013 08:13:48.641623 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b" path="/var/lib/kubelet/pods/e01e6bee-66dc-4aaf-8cfc-a601a28e7f2b/volumes" Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.047365 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b4c45f97d-8pghc"] Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.531733 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-whgqm"] Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.533207 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-whgqm" Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.540645 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-whgqm"] Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.635405 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qhk\" (UniqueName: \"kubernetes.io/projected/633e0793-d72b-4b22-88f3-8c58511dd268-kube-api-access-64qhk\") pod \"heat-db-create-whgqm\" (UID: \"633e0793-d72b-4b22-88f3-8c58511dd268\") " pod="openstack/heat-db-create-whgqm" Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.738672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qhk\" (UniqueName: \"kubernetes.io/projected/633e0793-d72b-4b22-88f3-8c58511dd268-kube-api-access-64qhk\") pod \"heat-db-create-whgqm\" (UID: \"633e0793-d72b-4b22-88f3-8c58511dd268\") " pod="openstack/heat-db-create-whgqm" Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.756374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qhk\" (UniqueName: \"kubernetes.io/projected/633e0793-d72b-4b22-88f3-8c58511dd268-kube-api-access-64qhk\") pod \"heat-db-create-whgqm\" (UID: \"633e0793-d72b-4b22-88f3-8c58511dd268\") " pod="openstack/heat-db-create-whgqm" Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.859640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4c45f97d-8pghc" event={"ID":"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8","Type":"ContainerStarted","Data":"8c5cebfaa2439da4e5cd69e625ee9d8993c00eab3e127fd1dcb598d826d978a5"} Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.860018 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4c45f97d-8pghc" event={"ID":"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8","Type":"ContainerStarted","Data":"2829241c42da7fd7b47f9f0a0f4a56bc6f60cb7d1e444e018a7749b01ea58fd7"} Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.860030 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4c45f97d-8pghc" event={"ID":"1197b9de-5ae7-42f1-b0f6-7c1adcc009e8","Type":"ContainerStarted","Data":"31b38b39f9c15e56bd5d2a775cf69fdef7810f9ace73c20bf55c2bb09402aa16"} Oct 13 08:13:49 crc kubenswrapper[4833]: I1013 08:13:49.860663 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-whgqm" Oct 13 08:13:50 crc kubenswrapper[4833]: I1013 08:13:50.382067 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b4c45f97d-8pghc" podStartSLOduration=2.382045506 podStartE2EDuration="2.382045506s" podCreationTimestamp="2025-10-13 08:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:13:49.89565198 +0000 UTC m=+6319.996074906" watchObservedRunningTime="2025-10-13 08:13:50.382045506 +0000 UTC m=+6320.482468422" Oct 13 08:13:50 crc kubenswrapper[4833]: I1013 08:13:50.386482 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-whgqm"] Oct 13 08:13:50 crc kubenswrapper[4833]: I1013 08:13:50.896940 4833 generic.go:334] "Generic (PLEG): container finished" podID="633e0793-d72b-4b22-88f3-8c58511dd268" containerID="85463d271053011f8742463109b09f517270dc130bdf684dd000987e4d1dadf3" exitCode=0 Oct 13 08:13:50 crc kubenswrapper[4833]: I1013 08:13:50.898643 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-whgqm" event={"ID":"633e0793-d72b-4b22-88f3-8c58511dd268","Type":"ContainerDied","Data":"85463d271053011f8742463109b09f517270dc130bdf684dd000987e4d1dadf3"} Oct 13 08:13:50 crc kubenswrapper[4833]: I1013 08:13:50.898677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-whgqm" event={"ID":"633e0793-d72b-4b22-88f3-8c58511dd268","Type":"ContainerStarted","Data":"930ac71b12594fff558a4375a4d711239e90667bf284f5ae0d18604a98652b2e"} Oct 13 08:13:52 crc kubenswrapper[4833]: I1013 08:13:52.327737 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-whgqm" Oct 13 08:13:52 crc kubenswrapper[4833]: I1013 08:13:52.503163 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64qhk\" (UniqueName: \"kubernetes.io/projected/633e0793-d72b-4b22-88f3-8c58511dd268-kube-api-access-64qhk\") pod \"633e0793-d72b-4b22-88f3-8c58511dd268\" (UID: \"633e0793-d72b-4b22-88f3-8c58511dd268\") " Oct 13 08:13:52 crc kubenswrapper[4833]: I1013 08:13:52.511634 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633e0793-d72b-4b22-88f3-8c58511dd268-kube-api-access-64qhk" (OuterVolumeSpecName: "kube-api-access-64qhk") pod "633e0793-d72b-4b22-88f3-8c58511dd268" (UID: "633e0793-d72b-4b22-88f3-8c58511dd268"). InnerVolumeSpecName "kube-api-access-64qhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:13:52 crc kubenswrapper[4833]: I1013 08:13:52.605818 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64qhk\" (UniqueName: \"kubernetes.io/projected/633e0793-d72b-4b22-88f3-8c58511dd268-kube-api-access-64qhk\") on node \"crc\" DevicePath \"\"" Oct 13 08:13:52 crc kubenswrapper[4833]: I1013 08:13:52.921755 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-whgqm" event={"ID":"633e0793-d72b-4b22-88f3-8c58511dd268","Type":"ContainerDied","Data":"930ac71b12594fff558a4375a4d711239e90667bf284f5ae0d18604a98652b2e"} Oct 13 08:13:52 crc kubenswrapper[4833]: I1013 08:13:52.921819 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930ac71b12594fff558a4375a4d711239e90667bf284f5ae0d18604a98652b2e" Oct 13 08:13:52 crc kubenswrapper[4833]: I1013 08:13:52.921915 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-whgqm" Oct 13 08:13:53 crc kubenswrapper[4833]: I1013 08:13:53.191901 4833 scope.go:117] "RemoveContainer" containerID="5bfe638406657c961623ef99c2da2850489f807594bbaa6e4c9a5214b966ecf4" Oct 13 08:13:53 crc kubenswrapper[4833]: I1013 08:13:53.227017 4833 scope.go:117] "RemoveContainer" containerID="89dbd316acd0c0fe8845b1bc1c8b4ffe8226fe32ae0be9030235e141d9c27520" Oct 13 08:13:53 crc kubenswrapper[4833]: I1013 08:13:53.291262 4833 scope.go:117] "RemoveContainer" containerID="9922d338e11ab2e964cedc0603b6f4656c3f43fcf739f737108c656a8505f28e" Oct 13 08:13:55 crc kubenswrapper[4833]: I1013 08:13:55.038838 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-c25l7"] Oct 13 08:13:55 crc kubenswrapper[4833]: I1013 08:13:55.052018 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-c25l7"] Oct 13 08:13:56 crc kubenswrapper[4833]: I1013 08:13:56.655897 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e923cf03-0639-47ae-af30-793d9582ec2b" path="/var/lib/kubelet/pods/e923cf03-0639-47ae-af30-793d9582ec2b/volumes" Oct 13 08:13:58 crc kubenswrapper[4833]: I1013 08:13:58.581706 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:58 crc kubenswrapper[4833]: I1013 08:13:58.582114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.625089 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-b888-account-create-nhhh8"] Oct 13 08:13:59 crc kubenswrapper[4833]: E1013 08:13:59.625841 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633e0793-d72b-4b22-88f3-8c58511dd268" containerName="mariadb-database-create" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.625863 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="633e0793-d72b-4b22-88f3-8c58511dd268" containerName="mariadb-database-create" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.626253 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="633e0793-d72b-4b22-88f3-8c58511dd268" containerName="mariadb-database-create" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.627363 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-b888-account-create-nhhh8" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.629552 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.638631 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b888-account-create-nhhh8"] Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.804388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5ljp\" (UniqueName: \"kubernetes.io/projected/a46f3081-52a3-46e0-9e1f-773be13994b9-kube-api-access-w5ljp\") pod \"heat-b888-account-create-nhhh8\" (UID: \"a46f3081-52a3-46e0-9e1f-773be13994b9\") " pod="openstack/heat-b888-account-create-nhhh8" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.906661 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5ljp\" (UniqueName: \"kubernetes.io/projected/a46f3081-52a3-46e0-9e1f-773be13994b9-kube-api-access-w5ljp\") pod \"heat-b888-account-create-nhhh8\" (UID: \"a46f3081-52a3-46e0-9e1f-773be13994b9\") " pod="openstack/heat-b888-account-create-nhhh8" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.947691 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5ljp\" (UniqueName: \"kubernetes.io/projected/a46f3081-52a3-46e0-9e1f-773be13994b9-kube-api-access-w5ljp\") pod \"heat-b888-account-create-nhhh8\" (UID: \"a46f3081-52a3-46e0-9e1f-773be13994b9\") " pod="openstack/heat-b888-account-create-nhhh8" Oct 13 08:13:59 crc kubenswrapper[4833]: I1013 08:13:59.968459 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b888-account-create-nhhh8" Oct 13 08:14:00 crc kubenswrapper[4833]: I1013 08:14:00.456425 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b888-account-create-nhhh8"] Oct 13 08:14:00 crc kubenswrapper[4833]: W1013 08:14:00.463407 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda46f3081_52a3_46e0_9e1f_773be13994b9.slice/crio-4def1d997691aad312ab98756a87cc274666ffcbb2ca43f9329223b406d9f7eb WatchSource:0}: Error finding container 4def1d997691aad312ab98756a87cc274666ffcbb2ca43f9329223b406d9f7eb: Status 404 returned error can't find the container with id 4def1d997691aad312ab98756a87cc274666ffcbb2ca43f9329223b406d9f7eb Oct 13 08:14:00 crc kubenswrapper[4833]: I1013 08:14:00.543194 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:14:00 crc kubenswrapper[4833]: I1013 08:14:00.543272 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:14:00 crc kubenswrapper[4833]: I1013 08:14:00.543330 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 08:14:00 crc kubenswrapper[4833]: I1013 08:14:00.544453 4833 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 08:14:00 crc kubenswrapper[4833]: I1013 08:14:00.544608 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" gracePeriod=600 Oct 13 08:14:00 crc kubenswrapper[4833]: E1013 08:14:00.670952 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:14:01 crc kubenswrapper[4833]: I1013 08:14:01.024392 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" exitCode=0 Oct 13 08:14:01 crc kubenswrapper[4833]: I1013 08:14:01.024462 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8"} Oct 13 08:14:01 crc kubenswrapper[4833]: I1013 08:14:01.024506 4833 scope.go:117] "RemoveContainer" containerID="e914b3d8cdaf60183f90c0954b20183878f4438a5f2a58b95970c43060e5f653" Oct 13 08:14:01 crc kubenswrapper[4833]: I1013 08:14:01.025307 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:14:01 crc kubenswrapper[4833]: E1013 08:14:01.025779 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:14:01 crc kubenswrapper[4833]: I1013 08:14:01.027697 4833 generic.go:334] "Generic (PLEG): container finished" podID="a46f3081-52a3-46e0-9e1f-773be13994b9" containerID="4ff117ea5cffce4b98b89a59680f1d178a628d40d23958ffcfda7a3db14ae8d7" exitCode=0 Oct 13 08:14:01 crc kubenswrapper[4833]: I1013 08:14:01.027747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b888-account-create-nhhh8" event={"ID":"a46f3081-52a3-46e0-9e1f-773be13994b9","Type":"ContainerDied","Data":"4ff117ea5cffce4b98b89a59680f1d178a628d40d23958ffcfda7a3db14ae8d7"} Oct 13 08:14:01 crc kubenswrapper[4833]: I1013 08:14:01.027778 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b888-account-create-nhhh8" 
event={"ID":"a46f3081-52a3-46e0-9e1f-773be13994b9","Type":"ContainerStarted","Data":"4def1d997691aad312ab98756a87cc274666ffcbb2ca43f9329223b406d9f7eb"} Oct 13 08:14:02 crc kubenswrapper[4833]: I1013 08:14:02.449917 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b888-account-create-nhhh8" Oct 13 08:14:02 crc kubenswrapper[4833]: I1013 08:14:02.478462 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5ljp\" (UniqueName: \"kubernetes.io/projected/a46f3081-52a3-46e0-9e1f-773be13994b9-kube-api-access-w5ljp\") pod \"a46f3081-52a3-46e0-9e1f-773be13994b9\" (UID: \"a46f3081-52a3-46e0-9e1f-773be13994b9\") " Oct 13 08:14:02 crc kubenswrapper[4833]: I1013 08:14:02.488676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46f3081-52a3-46e0-9e1f-773be13994b9-kube-api-access-w5ljp" (OuterVolumeSpecName: "kube-api-access-w5ljp") pod "a46f3081-52a3-46e0-9e1f-773be13994b9" (UID: "a46f3081-52a3-46e0-9e1f-773be13994b9"). InnerVolumeSpecName "kube-api-access-w5ljp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:14:02 crc kubenswrapper[4833]: I1013 08:14:02.581186 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5ljp\" (UniqueName: \"kubernetes.io/projected/a46f3081-52a3-46e0-9e1f-773be13994b9-kube-api-access-w5ljp\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:03 crc kubenswrapper[4833]: I1013 08:14:03.055167 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b888-account-create-nhhh8" event={"ID":"a46f3081-52a3-46e0-9e1f-773be13994b9","Type":"ContainerDied","Data":"4def1d997691aad312ab98756a87cc274666ffcbb2ca43f9329223b406d9f7eb"} Oct 13 08:14:03 crc kubenswrapper[4833]: I1013 08:14:03.055211 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4def1d997691aad312ab98756a87cc274666ffcbb2ca43f9329223b406d9f7eb" Oct 13 08:14:03 crc kubenswrapper[4833]: I1013 08:14:03.055255 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b888-account-create-nhhh8" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.674516 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-8q5lv"] Oct 13 08:14:04 crc kubenswrapper[4833]: E1013 08:14:04.675251 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46f3081-52a3-46e0-9e1f-773be13994b9" containerName="mariadb-account-create" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.675266 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46f3081-52a3-46e0-9e1f-773be13994b9" containerName="mariadb-account-create" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.675471 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46f3081-52a3-46e0-9e1f-773be13994b9" containerName="mariadb-account-create" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.676213 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.678639 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4pfkm" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.679505 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.690208 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-8q5lv"] Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.825625 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-config-data\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.825848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-combined-ca-bundle\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.825983 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcrfx\" (UniqueName: \"kubernetes.io/projected/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-kube-api-access-jcrfx\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.927759 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-config-data\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.927835 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-combined-ca-bundle\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.927919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcrfx\" (UniqueName: \"kubernetes.io/projected/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-kube-api-access-jcrfx\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.935873 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-config-data\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.943765 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-combined-ca-bundle\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" 
Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.944218 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcrfx\" (UniqueName: \"kubernetes.io/projected/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-kube-api-access-jcrfx\") pod \"heat-db-sync-8q5lv\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:04 crc kubenswrapper[4833]: I1013 08:14:04.999885 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:05 crc kubenswrapper[4833]: W1013 08:14:05.511143 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb6aedb_ee3c_44ea_b1a3_9c6b63119d01.slice/crio-bd50552fdb493814627ac63ac423388673c5847d2b504e19cdaa41910ab04495 WatchSource:0}: Error finding container bd50552fdb493814627ac63ac423388673c5847d2b504e19cdaa41910ab04495: Status 404 returned error can't find the container with id bd50552fdb493814627ac63ac423388673c5847d2b504e19cdaa41910ab04495 Oct 13 08:14:05 crc kubenswrapper[4833]: I1013 08:14:05.516351 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-8q5lv"] Oct 13 08:14:06 crc kubenswrapper[4833]: I1013 08:14:06.090179 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8q5lv" event={"ID":"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01","Type":"ContainerStarted","Data":"bd50552fdb493814627ac63ac423388673c5847d2b504e19cdaa41910ab04495"} Oct 13 08:14:10 crc kubenswrapper[4833]: I1013 08:14:10.704949 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:14:12 crc kubenswrapper[4833]: I1013 08:14:12.589432 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-b4c45f97d-8pghc" Oct 13 08:14:12 crc kubenswrapper[4833]: I1013 08:14:12.663331 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66c86f59f8-n4tkm"] Oct 13 08:14:12 crc kubenswrapper[4833]: I1013 08:14:12.663556 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66c86f59f8-n4tkm" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon-log" containerID="cri-o://ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08" gracePeriod=30 Oct 13 08:14:12 crc kubenswrapper[4833]: I1013 08:14:12.663675 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66c86f59f8-n4tkm" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon" containerID="cri-o://994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3" gracePeriod=30 Oct 13 08:14:14 crc kubenswrapper[4833]: I1013 08:14:14.183146 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8q5lv" event={"ID":"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01","Type":"ContainerStarted","Data":"d5a566a36d3105e7af84214e1d6eef7aea235d7c17e8aef56bc97bedac8d9914"} Oct 13 08:14:14 crc kubenswrapper[4833]: I1013 08:14:14.208049 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-8q5lv" podStartSLOduration=2.727194292 podStartE2EDuration="10.208022527s" podCreationTimestamp="2025-10-13 08:14:04 +0000 UTC" firstStartedPulling="2025-10-13 08:14:05.513453269 +0000 UTC m=+6335.613876175" lastFinishedPulling="2025-10-13 08:14:12.994281494 +0000 UTC m=+6343.094704410" 
observedRunningTime="2025-10-13 08:14:14.198234929 +0000 UTC m=+6344.298657865" watchObservedRunningTime="2025-10-13 08:14:14.208022527 +0000 UTC m=+6344.308445453" Oct 13 08:14:15 crc kubenswrapper[4833]: I1013 08:14:15.627432 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:14:15 crc kubenswrapper[4833]: E1013 08:14:15.628029 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:14:15 crc kubenswrapper[4833]: I1013 08:14:15.827844 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66c86f59f8-n4tkm" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.123:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:35696->10.217.1.123:8443: read: connection reset by peer" Oct 13 08:14:16 crc kubenswrapper[4833]: I1013 08:14:16.228611 4833 generic.go:334] "Generic (PLEG): container finished" podID="2d86df57-4667-4910-9463-8366b3080c9f" containerID="994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3" exitCode=0 Oct 13 08:14:16 crc kubenswrapper[4833]: I1013 08:14:16.228668 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c86f59f8-n4tkm" event={"ID":"2d86df57-4667-4910-9463-8366b3080c9f","Type":"ContainerDied","Data":"994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3"} Oct 13 08:14:17 crc kubenswrapper[4833]: I1013 08:14:17.239131 4833 generic.go:334] "Generic (PLEG): container finished" podID="9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01" containerID="d5a566a36d3105e7af84214e1d6eef7aea235d7c17e8aef56bc97bedac8d9914" exitCode=0 Oct 13 08:14:17 crc kubenswrapper[4833]: I1013 08:14:17.239196 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8q5lv" event={"ID":"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01","Type":"ContainerDied","Data":"d5a566a36d3105e7af84214e1d6eef7aea235d7c17e8aef56bc97bedac8d9914"} Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.682286 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.736907 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-combined-ca-bundle\") pod \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.737575 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcrfx\" (UniqueName: \"kubernetes.io/projected/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-kube-api-access-jcrfx\") pod \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.737705 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-config-data\") pod \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\" (UID: \"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01\") " Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.764875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-kube-api-access-jcrfx" (OuterVolumeSpecName: "kube-api-access-jcrfx") pod "9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01" (UID: "9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01"). InnerVolumeSpecName "kube-api-access-jcrfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.801399 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01" (UID: "9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.840504 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.840633 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcrfx\" (UniqueName: \"kubernetes.io/projected/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-kube-api-access-jcrfx\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.857287 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-config-data" (OuterVolumeSpecName: "config-data") pod "9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01" (UID: "9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:18 crc kubenswrapper[4833]: I1013 08:14:18.943107 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:19 crc kubenswrapper[4833]: I1013 08:14:19.274315 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8q5lv" event={"ID":"9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01","Type":"ContainerDied","Data":"bd50552fdb493814627ac63ac423388673c5847d2b504e19cdaa41910ab04495"} Oct 13 08:14:19 crc kubenswrapper[4833]: I1013 08:14:19.274394 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd50552fdb493814627ac63ac423388673c5847d2b504e19cdaa41910ab04495" Oct 13 08:14:19 crc kubenswrapper[4833]: I1013 08:14:19.274478 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-8q5lv" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.442234 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-69748b5-xzlww"] Oct 13 08:14:20 crc kubenswrapper[4833]: E1013 08:14:20.455099 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01" containerName="heat-db-sync" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.455122 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01" containerName="heat-db-sync" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.455334 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01" containerName="heat-db-sync" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.456095 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.459643 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4pfkm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.462873 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.463120 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.469372 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-69748b5-xzlww"] Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.528186 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6cfcb4bbcf-kjtxc"] Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.529506 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.532139 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.550275 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cfcb4bbcf-kjtxc"] Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.593315 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-combined-ca-bundle\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.593411 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data-custom\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.593492 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.593507 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.593524 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data-custom\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.593608 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-combined-ca-bundle\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.593628 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l7xd\" (UniqueName: \"kubernetes.io/projected/d256a571-b50d-47b7-b502-064e126ce3c4-kube-api-access-8l7xd\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.593690 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf776\" (UniqueName: \"kubernetes.io/projected/264dc850-9061-4a78-b7b5-f26d1e2fdee2-kube-api-access-zf776\") pod \"heat-engine-69748b5-xzlww\" (UID: 
\"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.601600 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-754446978c-lv9pm"] Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.602927 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.606891 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.618581 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-754446978c-lv9pm"] Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.695957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data-custom\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696035 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-combined-ca-bundle\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696113 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-combined-ca-bundle\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696201 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data-custom\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696267 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696283 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data-custom\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 
08:14:20.696357 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qqq\" (UniqueName: \"kubernetes.io/projected/b82a1fc6-0e74-4766-94ac-31a29f8985bf-kube-api-access-r4qqq\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696384 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696439 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-combined-ca-bundle\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696456 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l7xd\" (UniqueName: \"kubernetes.io/projected/d256a571-b50d-47b7-b502-064e126ce3c4-kube-api-access-8l7xd\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.696555 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf776\" (UniqueName: \"kubernetes.io/projected/264dc850-9061-4a78-b7b5-f26d1e2fdee2-kube-api-access-zf776\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.704252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-combined-ca-bundle\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.704866 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-combined-ca-bundle\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.705527 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data-custom\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.706623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.710960 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.711005 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data-custom\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.714148 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf776\" (UniqueName: \"kubernetes.io/projected/264dc850-9061-4a78-b7b5-f26d1e2fdee2-kube-api-access-zf776\") pod \"heat-engine-69748b5-xzlww\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.716323 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l7xd\" (UniqueName: \"kubernetes.io/projected/d256a571-b50d-47b7-b502-064e126ce3c4-kube-api-access-8l7xd\") pod \"heat-api-6cfcb4bbcf-kjtxc\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.799506 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4qqq\" (UniqueName: \"kubernetes.io/projected/b82a1fc6-0e74-4766-94ac-31a29f8985bf-kube-api-access-r4qqq\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.799611 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.799791 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data-custom\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.799817 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-combined-ca-bundle\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.803751 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.806583 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.806661 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-combined-ca-bundle\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.808052 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data-custom\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.819140 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4qqq\" (UniqueName: \"kubernetes.io/projected/b82a1fc6-0e74-4766-94ac-31a29f8985bf-kube-api-access-r4qqq\") pod \"heat-cfnapi-754446978c-lv9pm\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.864964 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:20 crc kubenswrapper[4833]: I1013 08:14:20.926321 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:21 crc kubenswrapper[4833]: I1013 08:14:21.308351 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-69748b5-xzlww"] Oct 13 08:14:21 crc kubenswrapper[4833]: I1013 08:14:21.434246 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-754446978c-lv9pm"] Oct 13 08:14:21 crc kubenswrapper[4833]: I1013 08:14:21.442488 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cfcb4bbcf-kjtxc"] Oct 13 08:14:22 crc kubenswrapper[4833]: I1013 08:14:22.322659 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69748b5-xzlww" event={"ID":"264dc850-9061-4a78-b7b5-f26d1e2fdee2","Type":"ContainerStarted","Data":"a9bbb65c37eb2c3608dc3bb9656a53d8d67ae6ccbc586486d01fa955c03e7f60"} Oct 13 08:14:22 crc kubenswrapper[4833]: I1013 08:14:22.323682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69748b5-xzlww" event={"ID":"264dc850-9061-4a78-b7b5-f26d1e2fdee2","Type":"ContainerStarted","Data":"76d15e3b7dfc360c7dacec599fdb5feb64e376c7b851285d7a0a0d6b672aebd7"} Oct 13 08:14:22 crc kubenswrapper[4833]: I1013 08:14:22.323970 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:22 crc kubenswrapper[4833]: I1013 08:14:22.328143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" event={"ID":"d256a571-b50d-47b7-b502-064e126ce3c4","Type":"ContainerStarted","Data":"883a12494edda6a24ee223f31ec3236907b182b9421e399ae7ca17c1bdb19390"} Oct 13 08:14:22 crc kubenswrapper[4833]: I1013 08:14:22.332232 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-754446978c-lv9pm" event={"ID":"b82a1fc6-0e74-4766-94ac-31a29f8985bf","Type":"ContainerStarted","Data":"c67ee7a863229e837df2467b7f4ece44c9dab0c5ee3b2cb6c71821ff3a8140d5"} Oct 13 08:14:22 crc kubenswrapper[4833]: I1013 08:14:22.347243 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-69748b5-xzlww" podStartSLOduration=2.347215056 podStartE2EDuration="2.347215056s" podCreationTimestamp="2025-10-13 08:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:14:22.34701872 +0000 UTC m=+6352.447441656" watchObservedRunningTime="2025-10-13 08:14:22.347215056 +0000 UTC m=+6352.447637972" Oct 13 08:14:23 crc kubenswrapper[4833]: I1013 08:14:23.250845 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66c86f59f8-n4tkm" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.123:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.123:8443: connect: connection refused" Oct 13 08:14:24 crc kubenswrapper[4833]: I1013 08:14:24.353877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-754446978c-lv9pm" event={"ID":"b82a1fc6-0e74-4766-94ac-31a29f8985bf","Type":"ContainerStarted","Data":"2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132"} Oct 13 08:14:24 crc kubenswrapper[4833]: I1013 08:14:24.354328 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:24 crc kubenswrapper[4833]: I1013 08:14:24.357064 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-6cfcb4bbcf-kjtxc" event={"ID":"d256a571-b50d-47b7-b502-064e126ce3c4","Type":"ContainerStarted","Data":"47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b"} Oct 13 08:14:24 crc kubenswrapper[4833]: I1013 08:14:24.357828 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:24 crc kubenswrapper[4833]: I1013 08:14:24.376999 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-754446978c-lv9pm" podStartSLOduration=2.364030644 podStartE2EDuration="4.37697535s" podCreationTimestamp="2025-10-13 08:14:20 +0000 UTC" firstStartedPulling="2025-10-13 08:14:21.450180264 +0000 UTC m=+6351.550603180" lastFinishedPulling="2025-10-13 08:14:23.46312497 +0000 UTC m=+6353.563547886" observedRunningTime="2025-10-13 08:14:24.373180772 +0000 UTC m=+6354.473603688" watchObservedRunningTime="2025-10-13 08:14:24.37697535 +0000 UTC m=+6354.477398276" Oct 13 08:14:24 crc kubenswrapper[4833]: I1013 08:14:24.397104 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" podStartSLOduration=2.379116533 podStartE2EDuration="4.397087242s" podCreationTimestamp="2025-10-13 08:14:20 +0000 UTC" firstStartedPulling="2025-10-13 08:14:21.449144674 +0000 UTC m=+6351.549567590" lastFinishedPulling="2025-10-13 08:14:23.467115383 +0000 UTC m=+6353.567538299" observedRunningTime="2025-10-13 08:14:24.393489949 +0000 UTC m=+6354.493912875" watchObservedRunningTime="2025-10-13 08:14:24.397087242 +0000 UTC m=+6354.497510158" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.282389 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6664854fc-pnxtp"] Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.284024 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.312125 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-cdd768f9b-ghvwt"] Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.313422 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.323975 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6664854fc-pnxtp"] Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.335945 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-combined-ca-bundle\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.336058 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6x82\" (UniqueName: \"kubernetes.io/projected/a70ba4a7-5e72-4533-a9e4-7181e816b057-kube-api-access-t6x82\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.336097 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-config-data-custom\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.336122 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-config-data\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.352494 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-cdd768f9b-ghvwt"] Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.361653 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6dd8b67b84-5hnhw"] Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.363240 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.410891 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6dd8b67b84-5hnhw"] Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.438782 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data-custom\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.438864 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.438965 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-combined-ca-bundle\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439002 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-combined-ca-bundle\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439033 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data-custom\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439071 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzqw6\" (UniqueName: \"kubernetes.io/projected/9152798d-5d54-4a88-8d83-05903594a058-kube-api-access-dzqw6\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439114 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-combined-ca-bundle\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439185 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jflf9\" (UniqueName: \"kubernetes.io/projected/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-kube-api-access-jflf9\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439210 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6x82\" (UniqueName: \"kubernetes.io/projected/a70ba4a7-5e72-4533-a9e4-7181e816b057-kube-api-access-t6x82\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439249 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-config-data-custom\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.439305 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-config-data\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.447419 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-combined-ca-bundle\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.449165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-config-data\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.462860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6x82\" (UniqueName: \"kubernetes.io/projected/a70ba4a7-5e72-4533-a9e4-7181e816b057-kube-api-access-t6x82\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.462876 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a70ba4a7-5e72-4533-a9e4-7181e816b057-config-data-custom\") pod \"heat-engine-6664854fc-pnxtp\" (UID: \"a70ba4a7-5e72-4533-a9e4-7181e816b057\") " pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.542013 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.542649 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-combined-ca-bundle\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.542841 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data-custom\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.543104 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqw6\" (UniqueName: \"kubernetes.io/projected/9152798d-5d54-4a88-8d83-05903594a058-kube-api-access-dzqw6\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.543978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.544412 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-combined-ca-bundle\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.545230 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jflf9\" (UniqueName: \"kubernetes.io/projected/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-kube-api-access-jflf9\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.545684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data-custom\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.548915 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.549075 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data-custom\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.549650 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-combined-ca-bundle\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.549948 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-combined-ca-bundle\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.554363 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data-custom\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.555130 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.565668 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzqw6\" (UniqueName: \"kubernetes.io/projected/9152798d-5d54-4a88-8d83-05903594a058-kube-api-access-dzqw6\") pod \"heat-cfnapi-cdd768f9b-ghvwt\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.570484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jflf9\" (UniqueName: \"kubernetes.io/projected/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-kube-api-access-jflf9\") pod \"heat-api-6dd8b67b84-5hnhw\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.652603 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.659947 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:27 crc kubenswrapper[4833]: I1013 08:14:27.692238 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.186516 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-cdd768f9b-ghvwt"] Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.270915 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6664854fc-pnxtp"] Oct 13 08:14:28 crc kubenswrapper[4833]: W1013 08:14:28.279489 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda70ba4a7_5e72_4533_a9e4_7181e816b057.slice/crio-34dfaf3b02038e4a52426d78c2c8447aca4267e736477b9058b4256c94104d1f WatchSource:0}: Error finding container 34dfaf3b02038e4a52426d78c2c8447aca4267e736477b9058b4256c94104d1f: Status 404 returned error can't find the container with id 34dfaf3b02038e4a52426d78c2c8447aca4267e736477b9058b4256c94104d1f Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.353173 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6dd8b67b84-5hnhw"] Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.419746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6664854fc-pnxtp" event={"ID":"a70ba4a7-5e72-4533-a9e4-7181e816b057","Type":"ContainerStarted","Data":"34dfaf3b02038e4a52426d78c2c8447aca4267e736477b9058b4256c94104d1f"} Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.423597 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cfcb4bbcf-kjtxc"] Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.423796 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" podUID="d256a571-b50d-47b7-b502-064e126ce3c4" containerName="heat-api" containerID="cri-o://47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b" gracePeriod=60 Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.424417 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6dd8b67b84-5hnhw" event={"ID":"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e","Type":"ContainerStarted","Data":"75a0844c25b0283db338f181dd93d88025b0e919295bf2d82ec18b08d63839d6"} Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.436778 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" event={"ID":"9152798d-5d54-4a88-8d83-05903594a058","Type":"ContainerStarted","Data":"c9d6c408e4289b696c612e4c09f36ddfbe1a55f5a5c598448d38932e45cd3254"} Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.438486 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-754446978c-lv9pm"] Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.438632 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-754446978c-lv9pm" podUID="b82a1fc6-0e74-4766-94ac-31a29f8985bf" containerName="heat-cfnapi" containerID="cri-o://2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132" gracePeriod=60 Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.460156 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-cc5d665d5-b4fc4"] Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.461469 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.484641 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-68779bfcb7-pjbs4"] Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.486559 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.497356 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-68779bfcb7-pjbs4"] Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.505953 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.511304 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.512761 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.530057 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cc5d665d5-b4fc4"] Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.547871 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.577879 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-config-data\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.577928 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-public-tls-certs\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.577968 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-internal-tls-certs\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578015 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-config-data-custom\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578052 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-combined-ca-bundle\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578076 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcjwd\" (UniqueName: \"kubernetes.io/projected/0943c9ec-c442-4180-aad0-6a1919690b86-kube-api-access-wcjwd\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578113 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-combined-ca-bundle\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578145 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-internal-tls-certs\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578171 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-config-data-custom\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578197 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqd9\" (UniqueName: \"kubernetes.io/projected/e298300a-64be-4c39-8c6d-6b40af6fdf2c-kube-api-access-8wqd9\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578212 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-config-data\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.578279 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-public-tls-certs\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.635103 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:14:28 crc kubenswrapper[4833]: E1013 08:14:28.635396 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.679696 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8wqd9\" (UniqueName: \"kubernetes.io/projected/e298300a-64be-4c39-8c6d-6b40af6fdf2c-kube-api-access-8wqd9\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.679731 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-config-data\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.679827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-public-tls-certs\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.679890 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-config-data\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.679907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-public-tls-certs\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.679943 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-internal-tls-certs\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.679981 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-config-data-custom\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.680019 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-combined-ca-bundle\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.680040 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcjwd\" (UniqueName: \"kubernetes.io/projected/0943c9ec-c442-4180-aad0-6a1919690b86-kube-api-access-wcjwd\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.680090 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-combined-ca-bundle\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.680135 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-internal-tls-certs\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.680159 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-config-data-custom\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.687766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-combined-ca-bundle\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.688400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-config-data-custom\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.688944 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-combined-ca-bundle\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.691397 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-internal-tls-certs\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.692664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-config-data\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.693462 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-internal-tls-certs\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.694285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-config-data-custom\") 
pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.701597 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-config-data\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.704475 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0943c9ec-c442-4180-aad0-6a1919690b86-public-tls-certs\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.704963 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e298300a-64be-4c39-8c6d-6b40af6fdf2c-public-tls-certs\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.708156 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcjwd\" (UniqueName: \"kubernetes.io/projected/0943c9ec-c442-4180-aad0-6a1919690b86-kube-api-access-wcjwd\") pod \"heat-cfnapi-68779bfcb7-pjbs4\" (UID: \"0943c9ec-c442-4180-aad0-6a1919690b86\") " pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.712285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wqd9\" (UniqueName: \"kubernetes.io/projected/e298300a-64be-4c39-8c6d-6b40af6fdf2c-kube-api-access-8wqd9\") pod \"heat-api-cc5d665d5-b4fc4\" (UID: \"e298300a-64be-4c39-8c6d-6b40af6fdf2c\") " pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.729956 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-754446978c-lv9pm" podUID="b82a1fc6-0e74-4766-94ac-31a29f8985bf" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.132:8000/healthcheck\": read tcp 10.217.0.2:43068->10.217.1.132:8000: read: connection reset by peer" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.954995 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:28 crc kubenswrapper[4833]: I1013 08:14:28.965408 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.042834 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.088257 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data\") pod \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.088340 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-combined-ca-bundle\") pod \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.088413 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data-custom\") pod \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.088529 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4qqq\" (UniqueName: \"kubernetes.io/projected/b82a1fc6-0e74-4766-94ac-31a29f8985bf-kube-api-access-r4qqq\") pod \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\" (UID: \"b82a1fc6-0e74-4766-94ac-31a29f8985bf\") " Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.099790 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82a1fc6-0e74-4766-94ac-31a29f8985bf-kube-api-access-r4qqq" (OuterVolumeSpecName: "kube-api-access-r4qqq") pod "b82a1fc6-0e74-4766-94ac-31a29f8985bf" (UID: "b82a1fc6-0e74-4766-94ac-31a29f8985bf"). InnerVolumeSpecName "kube-api-access-r4qqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.106695 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b82a1fc6-0e74-4766-94ac-31a29f8985bf" (UID: "b82a1fc6-0e74-4766-94ac-31a29f8985bf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.133720 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b82a1fc6-0e74-4766-94ac-31a29f8985bf" (UID: "b82a1fc6-0e74-4766-94ac-31a29f8985bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.170981 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data" (OuterVolumeSpecName: "config-data") pod "b82a1fc6-0e74-4766-94ac-31a29f8985bf" (UID: "b82a1fc6-0e74-4766-94ac-31a29f8985bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.190992 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.191227 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.191297 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4qqq\" (UniqueName: \"kubernetes.io/projected/b82a1fc6-0e74-4766-94ac-31a29f8985bf-kube-api-access-r4qqq\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.191358 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82a1fc6-0e74-4766-94ac-31a29f8985bf-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.448039 4833 generic.go:334] "Generic (PLEG): container finished" podID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" containerID="d57d1e336aa1ac9709975e66c27c083eb2c6d309ae12c31faeb9c7d8487df59f" exitCode=1 Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.448084 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6dd8b67b84-5hnhw" event={"ID":"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e","Type":"ContainerDied","Data":"d57d1e336aa1ac9709975e66c27c083eb2c6d309ae12c31faeb9c7d8487df59f"} Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.448802 4833 scope.go:117] "RemoveContainer" containerID="d57d1e336aa1ac9709975e66c27c083eb2c6d309ae12c31faeb9c7d8487df59f" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.452916 4833 generic.go:334] "Generic (PLEG): container finished" podID="b82a1fc6-0e74-4766-94ac-31a29f8985bf" containerID="2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132" exitCode=0 Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.453009 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-754446978c-lv9pm" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.453283 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-754446978c-lv9pm" event={"ID":"b82a1fc6-0e74-4766-94ac-31a29f8985bf","Type":"ContainerDied","Data":"2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132"} Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.453351 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-754446978c-lv9pm" event={"ID":"b82a1fc6-0e74-4766-94ac-31a29f8985bf","Type":"ContainerDied","Data":"c67ee7a863229e837df2467b7f4ece44c9dab0c5ee3b2cb6c71821ff3a8140d5"} Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.453372 4833 scope.go:117] "RemoveContainer" containerID="2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.461167 4833 generic.go:334] "Generic (PLEG): container finished" podID="9152798d-5d54-4a88-8d83-05903594a058" containerID="6a3d6d6f6052fb87f6db9978cab344ab32a0100d6b1d47aaa289db399214d04e" exitCode=1 Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.461223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" event={"ID":"9152798d-5d54-4a88-8d83-05903594a058","Type":"ContainerDied","Data":"6a3d6d6f6052fb87f6db9978cab344ab32a0100d6b1d47aaa289db399214d04e"} Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.461908 4833 scope.go:117] "RemoveContainer" containerID="6a3d6d6f6052fb87f6db9978cab344ab32a0100d6b1d47aaa289db399214d04e" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.468673 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6664854fc-pnxtp" event={"ID":"a70ba4a7-5e72-4533-a9e4-7181e816b057","Type":"ContainerStarted","Data":"cf3cecb8f9f53376ea3dbc9f31e6598052435a7acae1075c49978d66720bc3f2"} Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.469138 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.485228 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cc5d665d5-b4fc4"] Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.508382 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6664854fc-pnxtp" podStartSLOduration=2.508363329 podStartE2EDuration="2.508363329s" podCreationTimestamp="2025-10-13 08:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:14:29.503965894 +0000 UTC m=+6359.604388810" watchObservedRunningTime="2025-10-13 08:14:29.508363329 +0000 UTC m=+6359.608786245" Oct 13 08:14:29 crc kubenswrapper[4833]: W1013 08:14:29.509818 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode298300a_64be_4c39_8c6d_6b40af6fdf2c.slice/crio-35be0d2ce287029a247077afa5422c13f91d2e5dac3cd717cc2e93a69e81cbaa WatchSource:0}: Error finding container 35be0d2ce287029a247077afa5422c13f91d2e5dac3cd717cc2e93a69e81cbaa: Status 404 returned error can't find the container with id 35be0d2ce287029a247077afa5422c13f91d2e5dac3cd717cc2e93a69e81cbaa Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.549051 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-68779bfcb7-pjbs4"] Oct 13 
08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.795353 4833 scope.go:117] "RemoveContainer" containerID="2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132" Oct 13 08:14:29 crc kubenswrapper[4833]: E1013 08:14:29.797034 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132\": container with ID starting with 2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132 not found: ID does not exist" containerID="2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.797085 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132"} err="failed to get container status \"2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132\": rpc error: code = NotFound desc = could not find container \"2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132\": container with ID starting with 2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132 not found: ID does not exist" Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.797433 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-754446978c-lv9pm"] Oct 13 08:14:29 crc kubenswrapper[4833]: I1013 08:14:29.806921 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-754446978c-lv9pm"] Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.078568 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.478338 4833 generic.go:334] "Generic (PLEG): container finished" podID="9152798d-5d54-4a88-8d83-05903594a058" containerID="0098df4daaa24e556f6cc8fc3a87ccf885fec53e85bcf6e67042af69e00821fa" exitCode=1 Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.478460 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" event={"ID":"9152798d-5d54-4a88-8d83-05903594a058","Type":"ContainerDied","Data":"0098df4daaa24e556f6cc8fc3a87ccf885fec53e85bcf6e67042af69e00821fa"} Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.478743 4833 scope.go:117] "RemoveContainer" containerID="6a3d6d6f6052fb87f6db9978cab344ab32a0100d6b1d47aaa289db399214d04e" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.479024 4833 scope.go:117] "RemoveContainer" containerID="0098df4daaa24e556f6cc8fc3a87ccf885fec53e85bcf6e67042af69e00821fa" Oct 13 08:14:30 crc kubenswrapper[4833]: E1013 08:14:30.479354 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cdd768f9b-ghvwt_openstack(9152798d-5d54-4a88-8d83-05903594a058)\"" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" podUID="9152798d-5d54-4a88-8d83-05903594a058" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.486991 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" event={"ID":"0943c9ec-c442-4180-aad0-6a1919690b86","Type":"ContainerStarted","Data":"bbea1427812b54a2eef035864f7146cf098bf7f1ccbfcae40c04ab3b4ae6d937"} Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.487037 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
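
The `NotFound` exchange just above is the normal idempotent-delete dance rather than a real failure: container `2f01ca39...` was already removed along with its pod sandbox, so when `ContainerStatus`/`DeleteContainer` is retried, CRI-O answers gRPC `NotFound`, kubelet logs the error, and the pod `REMOVE` proceeds anyway. The standard pattern, sketched against a stand-in CRI call (`removeContainer` here is hypothetical):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer RPC; here it always
// answers NotFound, like the runtime does for 2f01ca39... above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// deleteIdempotent treats NotFound as success: the container being gone
// is exactly the state the caller wanted.
func deleteIdempotent(id string) error {
	if err := removeContainer(id); status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	fmt.Println("delete result:",
		deleteIdempotent("2f01ca39a1b7411fa6631f0a91dd20d538281f1affe2e7a5dc483ebe6aa9f132"))
}
```
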
pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" event={"ID":"0943c9ec-c442-4180-aad0-6a1919690b86","Type":"ContainerStarted","Data":"a9e8f6a87447b67b9d9c1921bb39efe5af3e4bfb1db8dfb1c25ca1c9236f94e2"} Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.487895 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.490247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cc5d665d5-b4fc4" event={"ID":"e298300a-64be-4c39-8c6d-6b40af6fdf2c","Type":"ContainerStarted","Data":"30dbd306618c1903bf67ab7320ef248d9ba70227f2d2aef120fa55c584168c19"} Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.490276 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cc5d665d5-b4fc4" event={"ID":"e298300a-64be-4c39-8c6d-6b40af6fdf2c","Type":"ContainerStarted","Data":"35be0d2ce287029a247077afa5422c13f91d2e5dac3cd717cc2e93a69e81cbaa"} Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.490831 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.493216 4833 generic.go:334] "Generic (PLEG): container finished" podID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" containerID="5b94542b3d9ab95051e749d7c1191858d78a82a4f3d610322f186760bc7a1ba4" exitCode=1 Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.493268 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6dd8b67b84-5hnhw" event={"ID":"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e","Type":"ContainerDied","Data":"5b94542b3d9ab95051e749d7c1191858d78a82a4f3d610322f186760bc7a1ba4"} Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.493661 4833 scope.go:117] "RemoveContainer" containerID="5b94542b3d9ab95051e749d7c1191858d78a82a4f3d610322f186760bc7a1ba4" Oct 13 08:14:30 crc kubenswrapper[4833]: E1013 08:14:30.493893 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6dd8b67b84-5hnhw_openstack(3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e)\"" pod="openstack/heat-api-6dd8b67b84-5hnhw" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.522473 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-cc5d665d5-b4fc4" podStartSLOduration=2.522454218 podStartE2EDuration="2.522454218s" podCreationTimestamp="2025-10-13 08:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:14:30.519032381 +0000 UTC m=+6360.619455297" watchObservedRunningTime="2025-10-13 08:14:30.522454218 +0000 UTC m=+6360.622877134" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.545604 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" podStartSLOduration=2.545590046 podStartE2EDuration="2.545590046s" podCreationTimestamp="2025-10-13 08:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:14:30.541364446 +0000 UTC m=+6360.641787362" watchObservedRunningTime="2025-10-13 08:14:30.545590046 +0000 UTC m=+6360.646012962" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.559390 4833 scope.go:117] 
"RemoveContainer" containerID="d57d1e336aa1ac9709975e66c27c083eb2c6d309ae12c31faeb9c7d8487df59f" Oct 13 08:14:30 crc kubenswrapper[4833]: I1013 08:14:30.649725 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82a1fc6-0e74-4766-94ac-31a29f8985bf" path="/var/lib/kubelet/pods/b82a1fc6-0e74-4766-94ac-31a29f8985bf/volumes" Oct 13 08:14:31 crc kubenswrapper[4833]: I1013 08:14:31.528405 4833 scope.go:117] "RemoveContainer" containerID="5b94542b3d9ab95051e749d7c1191858d78a82a4f3d610322f186760bc7a1ba4" Oct 13 08:14:31 crc kubenswrapper[4833]: E1013 08:14:31.529078 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6dd8b67b84-5hnhw_openstack(3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e)\"" pod="openstack/heat-api-6dd8b67b84-5hnhw" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" Oct 13 08:14:31 crc kubenswrapper[4833]: I1013 08:14:31.538163 4833 scope.go:117] "RemoveContainer" containerID="0098df4daaa24e556f6cc8fc3a87ccf885fec53e85bcf6e67042af69e00821fa" Oct 13 08:14:31 crc kubenswrapper[4833]: E1013 08:14:31.538349 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cdd768f9b-ghvwt_openstack(9152798d-5d54-4a88-8d83-05903594a058)\"" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" podUID="9152798d-5d54-4a88-8d83-05903594a058" Oct 13 08:14:32 crc kubenswrapper[4833]: I1013 08:14:32.660667 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:32 crc kubenswrapper[4833]: I1013 08:14:32.661642 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:32 crc kubenswrapper[4833]: I1013 08:14:32.662684 4833 scope.go:117] "RemoveContainer" containerID="0098df4daaa24e556f6cc8fc3a87ccf885fec53e85bcf6e67042af69e00821fa" Oct 13 08:14:32 crc kubenswrapper[4833]: E1013 08:14:32.662988 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cdd768f9b-ghvwt_openstack(9152798d-5d54-4a88-8d83-05903594a058)\"" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" podUID="9152798d-5d54-4a88-8d83-05903594a058" Oct 13 08:14:32 crc kubenswrapper[4833]: I1013 08:14:32.693011 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:32 crc kubenswrapper[4833]: I1013 08:14:32.693092 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:32 crc kubenswrapper[4833]: I1013 08:14:32.693918 4833 scope.go:117] "RemoveContainer" containerID="5b94542b3d9ab95051e749d7c1191858d78a82a4f3d610322f186760bc7a1ba4" Oct 13 08:14:32 crc kubenswrapper[4833]: E1013 08:14:32.694327 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6dd8b67b84-5hnhw_openstack(3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e)\"" pod="openstack/heat-api-6dd8b67b84-5hnhw" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" Oct 13 08:14:33 crc kubenswrapper[4833]: I1013 08:14:33.250606 4833 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/horizon-66c86f59f8-n4tkm" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.123:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.123:8443: connect: connection refused" Oct 13 08:14:33 crc kubenswrapper[4833]: I1013 08:14:33.251363 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:14:33 crc kubenswrapper[4833]: I1013 08:14:33.564666 4833 scope.go:117] "RemoveContainer" containerID="0098df4daaa24e556f6cc8fc3a87ccf885fec53e85bcf6e67042af69e00821fa" Oct 13 08:14:33 crc kubenswrapper[4833]: E1013 08:14:33.565091 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cdd768f9b-ghvwt_openstack(9152798d-5d54-4a88-8d83-05903594a058)\"" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" podUID="9152798d-5d54-4a88-8d83-05903594a058" Oct 13 08:14:33 crc kubenswrapper[4833]: I1013 08:14:33.817524 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" podUID="d256a571-b50d-47b7-b502-064e126ce3c4" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.131:8004/healthcheck\": read tcp 10.217.0.2:46296->10.217.1.131:8004: read: connection reset by peer" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.197987 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.216826 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data-custom\") pod \"d256a571-b50d-47b7-b502-064e126ce3c4\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.216950 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l7xd\" (UniqueName: \"kubernetes.io/projected/d256a571-b50d-47b7-b502-064e126ce3c4-kube-api-access-8l7xd\") pod \"d256a571-b50d-47b7-b502-064e126ce3c4\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.216984 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data\") pod \"d256a571-b50d-47b7-b502-064e126ce3c4\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.217092 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-combined-ca-bundle\") pod \"d256a571-b50d-47b7-b502-064e126ce3c4\" (UID: \"d256a571-b50d-47b7-b502-064e126ce3c4\") " Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.223025 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d256a571-b50d-47b7-b502-064e126ce3c4" (UID: "d256a571-b50d-47b7-b502-064e126ce3c4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.223414 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d256a571-b50d-47b7-b502-064e126ce3c4-kube-api-access-8l7xd" (OuterVolumeSpecName: "kube-api-access-8l7xd") pod "d256a571-b50d-47b7-b502-064e126ce3c4" (UID: "d256a571-b50d-47b7-b502-064e126ce3c4"). InnerVolumeSpecName "kube-api-access-8l7xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.271654 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d256a571-b50d-47b7-b502-064e126ce3c4" (UID: "d256a571-b50d-47b7-b502-064e126ce3c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.321042 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.321086 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l7xd\" (UniqueName: \"kubernetes.io/projected/d256a571-b50d-47b7-b502-064e126ce3c4-kube-api-access-8l7xd\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.321104 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.322856 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data" (OuterVolumeSpecName: "config-data") pod "d256a571-b50d-47b7-b502-064e126ce3c4" (UID: "d256a571-b50d-47b7-b502-064e126ce3c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.422970 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d256a571-b50d-47b7-b502-064e126ce3c4-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.578070 4833 generic.go:334] "Generic (PLEG): container finished" podID="d256a571-b50d-47b7-b502-064e126ce3c4" containerID="47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b" exitCode=0 Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.578137 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" event={"ID":"d256a571-b50d-47b7-b502-064e126ce3c4","Type":"ContainerDied","Data":"47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b"} Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.578183 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" event={"ID":"d256a571-b50d-47b7-b502-064e126ce3c4","Type":"ContainerDied","Data":"883a12494edda6a24ee223f31ec3236907b182b9421e399ae7ca17c1bdb19390"} Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.578210 4833 scope.go:117] "RemoveContainer" containerID="47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.578209 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cfcb4bbcf-kjtxc" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.637009 4833 scope.go:117] "RemoveContainer" containerID="47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b" Oct 13 08:14:34 crc kubenswrapper[4833]: E1013 08:14:34.637699 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b\": container with ID starting with 47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b not found: ID does not exist" containerID="47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.637839 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b"} err="failed to get container status \"47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b\": rpc error: code = NotFound desc = could not find container \"47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b\": container with ID starting with 47d1d07b4aa5e11a0d8d8bb458caf3ea564539db5f3e870722483bdf7dd5558b not found: ID does not exist" Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.644771 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cfcb4bbcf-kjtxc"] Oct 13 08:14:34 crc kubenswrapper[4833]: I1013 08:14:34.649301 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6cfcb4bbcf-kjtxc"] Oct 13 08:14:36 crc kubenswrapper[4833]: I1013 08:14:36.641097 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d256a571-b50d-47b7-b502-064e126ce3c4" path="/var/lib/kubelet/pods/d256a571-b50d-47b7-b502-064e126ce3c4/volumes" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.258930 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-cc5d665d5-b4fc4" Oct 13 08:14:40 
crc kubenswrapper[4833]: I1013 08:14:40.321105 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6dd8b67b84-5hnhw"] Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.392853 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-68779bfcb7-pjbs4" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.468086 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-cdd768f9b-ghvwt"] Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.635075 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:14:40 crc kubenswrapper[4833]: E1013 08:14:40.635758 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.817138 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.824067 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.837615 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.876935 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jflf9\" (UniqueName: \"kubernetes.io/projected/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-kube-api-access-jflf9\") pod \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.876975 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-combined-ca-bundle\") pod \"9152798d-5d54-4a88-8d83-05903594a058\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.876998 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data-custom\") pod \"9152798d-5d54-4a88-8d83-05903594a058\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.877073 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data\") pod \"9152798d-5d54-4a88-8d83-05903594a058\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.877111 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data-custom\") pod \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 
08:14:40.877157 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzqw6\" (UniqueName: \"kubernetes.io/projected/9152798d-5d54-4a88-8d83-05903594a058-kube-api-access-dzqw6\") pod \"9152798d-5d54-4a88-8d83-05903594a058\" (UID: \"9152798d-5d54-4a88-8d83-05903594a058\") " Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.877244 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data\") pod \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.877355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-combined-ca-bundle\") pod \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\" (UID: \"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e\") " Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.885660 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9152798d-5d54-4a88-8d83-05903594a058" (UID: "9152798d-5d54-4a88-8d83-05903594a058"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.886072 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-kube-api-access-jflf9" (OuterVolumeSpecName: "kube-api-access-jflf9") pod "3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" (UID: "3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e"). InnerVolumeSpecName "kube-api-access-jflf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.888078 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9152798d-5d54-4a88-8d83-05903594a058-kube-api-access-dzqw6" (OuterVolumeSpecName: "kube-api-access-dzqw6") pod "9152798d-5d54-4a88-8d83-05903594a058" (UID: "9152798d-5d54-4a88-8d83-05903594a058"). InnerVolumeSpecName "kube-api-access-dzqw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.889554 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" (UID: "3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.923805 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" (UID: "3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.923819 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9152798d-5d54-4a88-8d83-05903594a058" (UID: "9152798d-5d54-4a88-8d83-05903594a058"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.941305 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data" (OuterVolumeSpecName: "config-data") pod "9152798d-5d54-4a88-8d83-05903594a058" (UID: "9152798d-5d54-4a88-8d83-05903594a058"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.954746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data" (OuterVolumeSpecName: "config-data") pod "3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" (UID: "3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.979343 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.979590 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.979654 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jflf9\" (UniqueName: \"kubernetes.io/projected/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-kube-api-access-jflf9\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.979717 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.979780 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.979837 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9152798d-5d54-4a88-8d83-05903594a058-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.979893 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:40 crc kubenswrapper[4833]: I1013 08:14:40.979944 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzqw6\" (UniqueName: \"kubernetes.io/projected/9152798d-5d54-4a88-8d83-05903594a058-kube-api-access-dzqw6\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:41 crc 
kubenswrapper[4833]: I1013 08:14:41.670247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6dd8b67b84-5hnhw" event={"ID":"3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e","Type":"ContainerDied","Data":"75a0844c25b0283db338f181dd93d88025b0e919295bf2d82ec18b08d63839d6"} Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.670628 4833 scope.go:117] "RemoveContainer" containerID="5b94542b3d9ab95051e749d7c1191858d78a82a4f3d610322f186760bc7a1ba4" Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.670325 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6dd8b67b84-5hnhw" Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.672902 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" event={"ID":"9152798d-5d54-4a88-8d83-05903594a058","Type":"ContainerDied","Data":"c9d6c408e4289b696c612e4c09f36ddfbe1a55f5a5c598448d38932e45cd3254"} Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.673005 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cdd768f9b-ghvwt" Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.754098 4833 scope.go:117] "RemoveContainer" containerID="0098df4daaa24e556f6cc8fc3a87ccf885fec53e85bcf6e67042af69e00821fa" Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.796213 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6dd8b67b84-5hnhw"] Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.809318 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6dd8b67b84-5hnhw"] Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.848812 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-cdd768f9b-ghvwt"] Oct 13 08:14:41 crc kubenswrapper[4833]: I1013 08:14:41.863518 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-cdd768f9b-ghvwt"] Oct 13 08:14:42 crc kubenswrapper[4833]: I1013 08:14:42.686836 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" path="/var/lib/kubelet/pods/3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e/volumes" Oct 13 08:14:42 crc kubenswrapper[4833]: I1013 08:14:42.687846 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9152798d-5d54-4a88-8d83-05903594a058" path="/var/lib/kubelet/pods/9152798d-5d54-4a88-8d83-05903594a058/volumes" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.088149 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.130759 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-scripts\") pod \"2d86df57-4667-4910-9463-8366b3080c9f\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.130803 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kxdm\" (UniqueName: \"kubernetes.io/projected/2d86df57-4667-4910-9463-8366b3080c9f-kube-api-access-4kxdm\") pod \"2d86df57-4667-4910-9463-8366b3080c9f\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.130937 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-config-data\") pod \"2d86df57-4667-4910-9463-8366b3080c9f\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.131012 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d86df57-4667-4910-9463-8366b3080c9f-logs\") pod \"2d86df57-4667-4910-9463-8366b3080c9f\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.131046 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-combined-ca-bundle\") pod \"2d86df57-4667-4910-9463-8366b3080c9f\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.131090 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-secret-key\") pod \"2d86df57-4667-4910-9463-8366b3080c9f\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.131209 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-tls-certs\") pod \"2d86df57-4667-4910-9463-8366b3080c9f\" (UID: \"2d86df57-4667-4910-9463-8366b3080c9f\") " Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.132994 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d86df57-4667-4910-9463-8366b3080c9f-logs" (OuterVolumeSpecName: "logs") pod "2d86df57-4667-4910-9463-8366b3080c9f" (UID: "2d86df57-4667-4910-9463-8366b3080c9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.138486 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2d86df57-4667-4910-9463-8366b3080c9f" (UID: "2d86df57-4667-4910-9463-8366b3080c9f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.140655 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d86df57-4667-4910-9463-8366b3080c9f-kube-api-access-4kxdm" (OuterVolumeSpecName: "kube-api-access-4kxdm") pod "2d86df57-4667-4910-9463-8366b3080c9f" (UID: "2d86df57-4667-4910-9463-8366b3080c9f"). InnerVolumeSpecName "kube-api-access-4kxdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.165041 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-scripts" (OuterVolumeSpecName: "scripts") pod "2d86df57-4667-4910-9463-8366b3080c9f" (UID: "2d86df57-4667-4910-9463-8366b3080c9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.170683 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d86df57-4667-4910-9463-8366b3080c9f" (UID: "2d86df57-4667-4910-9463-8366b3080c9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.179912 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-config-data" (OuterVolumeSpecName: "config-data") pod "2d86df57-4667-4910-9463-8366b3080c9f" (UID: "2d86df57-4667-4910-9463-8366b3080c9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.201782 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2d86df57-4667-4910-9463-8366b3080c9f" (UID: "2d86df57-4667-4910-9463-8366b3080c9f"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.233604 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d86df57-4667-4910-9463-8366b3080c9f-logs\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.233632 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.233647 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.233655 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d86df57-4667-4910-9463-8366b3080c9f-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.233664 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.233672 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kxdm\" (UniqueName: \"kubernetes.io/projected/2d86df57-4667-4910-9463-8366b3080c9f-kube-api-access-4kxdm\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.233680 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d86df57-4667-4910-9463-8366b3080c9f-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.709610 4833 generic.go:334] "Generic (PLEG): container finished" podID="2d86df57-4667-4910-9463-8366b3080c9f" containerID="ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08" exitCode=137 Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.709652 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c86f59f8-n4tkm" event={"ID":"2d86df57-4667-4910-9463-8366b3080c9f","Type":"ContainerDied","Data":"ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08"} Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.709677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c86f59f8-n4tkm" event={"ID":"2d86df57-4667-4910-9463-8366b3080c9f","Type":"ContainerDied","Data":"3c40f05a27514a42c5ccf846f6fbf566f5c76ffcfba43617e26958d222b25668"} Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.709695 4833 scope.go:117] "RemoveContainer" containerID="994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.709786 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66c86f59f8-n4tkm" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.746666 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66c86f59f8-n4tkm"] Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.756569 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66c86f59f8-n4tkm"] Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.898801 4833 scope.go:117] "RemoveContainer" containerID="ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.916572 4833 scope.go:117] "RemoveContainer" containerID="994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3" Oct 13 08:14:43 crc kubenswrapper[4833]: E1013 08:14:43.916999 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3\": container with ID starting with 994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3 not found: ID does not exist" containerID="994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.917040 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3"} err="failed to get container status \"994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3\": rpc error: code = NotFound desc = could not find container \"994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3\": container with ID starting with 994c4d313cd3b450e5bd3a26d1d2422f374f88377480ff0e54b2d612aa35b2f3 not found: ID does not exist" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.917069 4833 scope.go:117] "RemoveContainer" containerID="ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08" Oct 13 08:14:43 crc kubenswrapper[4833]: E1013 08:14:43.917405 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08\": container with ID starting with ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08 not found: ID does not exist" containerID="ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08" Oct 13 08:14:43 crc kubenswrapper[4833]: I1013 08:14:43.917437 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08"} err="failed to get container status \"ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08\": rpc error: code = NotFound desc = could not find container \"ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08\": container with ID starting with ab31ffc2e1fec12a584bcd7ca0c6423a76c4147f5fe94dc28034f4064e3dcc08 not found: ID does not exist" Oct 13 08:14:44 crc kubenswrapper[4833]: I1013 08:14:44.640520 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d86df57-4667-4910-9463-8366b3080c9f" path="/var/lib/kubelet/pods/2d86df57-4667-4910-9463-8366b3080c9f/volumes" Oct 13 08:14:47 crc kubenswrapper[4833]: I1013 08:14:47.682234 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6664854fc-pnxtp" Oct 13 08:14:47 crc kubenswrapper[4833]: I1013 08:14:47.763202 4833 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/heat-engine-69748b5-xzlww"] Oct 13 08:14:47 crc kubenswrapper[4833]: I1013 08:14:47.763446 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-69748b5-xzlww" podUID="264dc850-9061-4a78-b7b5-f26d1e2fdee2" containerName="heat-engine" containerID="cri-o://a9bbb65c37eb2c3608dc3bb9656a53d8d67ae6ccbc586486d01fa955c03e7f60" gracePeriod=60 Oct 13 08:14:50 crc kubenswrapper[4833]: E1013 08:14:50.806166 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9bbb65c37eb2c3608dc3bb9656a53d8d67ae6ccbc586486d01fa955c03e7f60" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 13 08:14:50 crc kubenswrapper[4833]: E1013 08:14:50.808911 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9bbb65c37eb2c3608dc3bb9656a53d8d67ae6ccbc586486d01fa955c03e7f60" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 13 08:14:50 crc kubenswrapper[4833]: E1013 08:14:50.811229 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9bbb65c37eb2c3608dc3bb9656a53d8d67ae6ccbc586486d01fa955c03e7f60" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 13 08:14:50 crc kubenswrapper[4833]: E1013 08:14:50.811318 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-69748b5-xzlww" podUID="264dc850-9061-4a78-b7b5-f26d1e2fdee2" containerName="heat-engine" Oct 13 08:14:53 crc kubenswrapper[4833]: I1013 08:14:53.498496 4833 scope.go:117] "RemoveContainer" containerID="4972162f9b7ee3ce65a461cc5c46bb9c6b189afd806fd4241c71a71f927d4297" Oct 13 08:14:53 crc kubenswrapper[4833]: I1013 08:14:53.626993 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:14:53 crc kubenswrapper[4833]: E1013 08:14:53.627397 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:14:53 crc kubenswrapper[4833]: I1013 08:14:53.847513 4833 generic.go:334] "Generic (PLEG): container finished" podID="264dc850-9061-4a78-b7b5-f26d1e2fdee2" containerID="a9bbb65c37eb2c3608dc3bb9656a53d8d67ae6ccbc586486d01fa955c03e7f60" exitCode=0 Oct 13 08:14:53 crc kubenswrapper[4833]: I1013 08:14:53.847600 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69748b5-xzlww" event={"ID":"264dc850-9061-4a78-b7b5-f26d1e2fdee2","Type":"ContainerDied","Data":"a9bbb65c37eb2c3608dc3bb9656a53d8d67ae6ccbc586486d01fa955c03e7f60"} Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.160607 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.321379 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data-custom\") pod \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.321523 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data\") pod \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.321599 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf776\" (UniqueName: \"kubernetes.io/projected/264dc850-9061-4a78-b7b5-f26d1e2fdee2-kube-api-access-zf776\") pod \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.321669 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-combined-ca-bundle\") pod \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\" (UID: \"264dc850-9061-4a78-b7b5-f26d1e2fdee2\") " Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.329428 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264dc850-9061-4a78-b7b5-f26d1e2fdee2-kube-api-access-zf776" (OuterVolumeSpecName: "kube-api-access-zf776") pod "264dc850-9061-4a78-b7b5-f26d1e2fdee2" (UID: "264dc850-9061-4a78-b7b5-f26d1e2fdee2"). InnerVolumeSpecName "kube-api-access-zf776". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.348483 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "264dc850-9061-4a78-b7b5-f26d1e2fdee2" (UID: "264dc850-9061-4a78-b7b5-f26d1e2fdee2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.357493 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "264dc850-9061-4a78-b7b5-f26d1e2fdee2" (UID: "264dc850-9061-4a78-b7b5-f26d1e2fdee2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.385228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data" (OuterVolumeSpecName: "config-data") pod "264dc850-9061-4a78-b7b5-f26d1e2fdee2" (UID: "264dc850-9061-4a78-b7b5-f26d1e2fdee2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.424392 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.424459 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.424478 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf776\" (UniqueName: \"kubernetes.io/projected/264dc850-9061-4a78-b7b5-f26d1e2fdee2-kube-api-access-zf776\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.424497 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dc850-9061-4a78-b7b5-f26d1e2fdee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.859599 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69748b5-xzlww" event={"ID":"264dc850-9061-4a78-b7b5-f26d1e2fdee2","Type":"ContainerDied","Data":"76d15e3b7dfc360c7dacec599fdb5feb64e376c7b851285d7a0a0d6b672aebd7"} Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.859649 4833 scope.go:117] "RemoveContainer" containerID="a9bbb65c37eb2c3608dc3bb9656a53d8d67ae6ccbc586486d01fa955c03e7f60" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.859763 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-69748b5-xzlww" Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.892493 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-69748b5-xzlww"] Oct 13 08:14:54 crc kubenswrapper[4833]: I1013 08:14:54.903232 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-69748b5-xzlww"] Oct 13 08:14:56 crc kubenswrapper[4833]: I1013 08:14:56.644053 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264dc850-9061-4a78-b7b5-f26d1e2fdee2" path="/var/lib/kubelet/pods/264dc850-9061-4a78-b7b5-f26d1e2fdee2/volumes" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.045856 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jspfq"] Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.058370 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jspfq"] Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.161205 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"] Oct 13 08:15:00 crc kubenswrapper[4833]: E1013 08:15:00.162409 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.162454 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon" Oct 13 08:15:00 crc kubenswrapper[4833]: E1013 08:15:00.162481 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9152798d-5d54-4a88-8d83-05903594a058" containerName="heat-cfnapi" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.162490 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9152798d-5d54-4a88-8d83-05903594a058" containerName="heat-cfnapi" Oct 13 08:15:00 crc kubenswrapper[4833]: E1013 08:15:00.162510 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d256a571-b50d-47b7-b502-064e126ce3c4" containerName="heat-api" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.162518 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d256a571-b50d-47b7-b502-064e126ce3c4" containerName="heat-api" Oct 13 08:15:00 crc kubenswrapper[4833]: E1013 08:15:00.162552 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" containerName="heat-api" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.162564 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" containerName="heat-api" Oct 13 08:15:00 crc kubenswrapper[4833]: E1013 08:15:00.162595 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82a1fc6-0e74-4766-94ac-31a29f8985bf" containerName="heat-cfnapi" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.162606 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82a1fc6-0e74-4766-94ac-31a29f8985bf" containerName="heat-cfnapi" Oct 13 08:15:00 crc kubenswrapper[4833]: E1013 08:15:00.162637 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon-log" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.162649 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon-log" Oct 13 08:15:00 crc kubenswrapper[4833]: E1013 08:15:00.162677 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264dc850-9061-4a78-b7b5-f26d1e2fdee2" containerName="heat-engine" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.162688 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="264dc850-9061-4a78-b7b5-f26d1e2fdee2" containerName="heat-engine" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.163062 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="264dc850-9061-4a78-b7b5-f26d1e2fdee2" containerName="heat-engine" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.163096 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" containerName="heat-api" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.163139 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9152798d-5d54-4a88-8d83-05903594a058" containerName="heat-cfnapi" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.163156 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" containerName="heat-api" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.163172 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d256a571-b50d-47b7-b502-064e126ce3c4" containerName="heat-api" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.163191 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon-log" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.163223 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d86df57-4667-4910-9463-8366b3080c9f" containerName="horizon" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.163249 4833 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b82a1fc6-0e74-4766-94ac-31a29f8985bf" containerName="heat-cfnapi" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.164382 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.172378 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"] Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.206335 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.206582 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.346612 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40b04da5-acdc-491e-b1a6-c2a377dc8284-config-volume\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.346697 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqcr\" (UniqueName: \"kubernetes.io/projected/40b04da5-acdc-491e-b1a6-c2a377dc8284-kube-api-access-txqcr\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.347134 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40b04da5-acdc-491e-b1a6-c2a377dc8284-secret-volume\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.448882 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40b04da5-acdc-491e-b1a6-c2a377dc8284-config-volume\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.448933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqcr\" (UniqueName: \"kubernetes.io/projected/40b04da5-acdc-491e-b1a6-c2a377dc8284-kube-api-access-txqcr\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.449021 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40b04da5-acdc-491e-b1a6-c2a377dc8284-secret-volume\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 
08:15:00.450473 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40b04da5-acdc-491e-b1a6-c2a377dc8284-config-volume\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"
Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.457977 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40b04da5-acdc-491e-b1a6-c2a377dc8284-secret-volume\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"
Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.481832 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqcr\" (UniqueName: \"kubernetes.io/projected/40b04da5-acdc-491e-b1a6-c2a377dc8284-kube-api-access-txqcr\") pod \"collect-profiles-29339055-77zpf\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"
Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.533853 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"
Oct 13 08:15:00 crc kubenswrapper[4833]: I1013 08:15:00.645624 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601fd1b5-672e-469c-ab36-ea8b202585b6" path="/var/lib/kubelet/pods/601fd1b5-672e-469c-ab36-ea8b202585b6/volumes"
Oct 13 08:15:01 crc kubenswrapper[4833]: I1013 08:15:01.011304 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"]
Oct 13 08:15:01 crc kubenswrapper[4833]: I1013 08:15:01.953022 4833 generic.go:334] "Generic (PLEG): container finished" podID="40b04da5-acdc-491e-b1a6-c2a377dc8284" containerID="77d1505ed1c409974f1884c5c44b65e1e7d11bf83e2f541f8523d078a2f3142d" exitCode=0
Oct 13 08:15:01 crc kubenswrapper[4833]: I1013 08:15:01.953091 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" event={"ID":"40b04da5-acdc-491e-b1a6-c2a377dc8284","Type":"ContainerDied","Data":"77d1505ed1c409974f1884c5c44b65e1e7d11bf83e2f541f8523d078a2f3142d"}
Oct 13 08:15:01 crc kubenswrapper[4833]: I1013 08:15:01.953386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" event={"ID":"40b04da5-acdc-491e-b1a6-c2a377dc8284","Type":"ContainerStarted","Data":"4bcb9aa40523b3e2d065721b49de399bc76c5e9bcc11fe21a7c9e56731a1924f"}
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.345154 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.525230 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40b04da5-acdc-491e-b1a6-c2a377dc8284-config-volume\") pod \"40b04da5-acdc-491e-b1a6-c2a377dc8284\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") "
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.525298 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40b04da5-acdc-491e-b1a6-c2a377dc8284-secret-volume\") pod \"40b04da5-acdc-491e-b1a6-c2a377dc8284\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") "
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.525338 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txqcr\" (UniqueName: \"kubernetes.io/projected/40b04da5-acdc-491e-b1a6-c2a377dc8284-kube-api-access-txqcr\") pod \"40b04da5-acdc-491e-b1a6-c2a377dc8284\" (UID: \"40b04da5-acdc-491e-b1a6-c2a377dc8284\") "
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.526039 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b04da5-acdc-491e-b1a6-c2a377dc8284-config-volume" (OuterVolumeSpecName: "config-volume") pod "40b04da5-acdc-491e-b1a6-c2a377dc8284" (UID: "40b04da5-acdc-491e-b1a6-c2a377dc8284"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.526164 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40b04da5-acdc-491e-b1a6-c2a377dc8284-config-volume\") on node \"crc\" DevicePath \"\""
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.530672 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b04da5-acdc-491e-b1a6-c2a377dc8284-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40b04da5-acdc-491e-b1a6-c2a377dc8284" (UID: "40b04da5-acdc-491e-b1a6-c2a377dc8284"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.531393 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b04da5-acdc-491e-b1a6-c2a377dc8284-kube-api-access-txqcr" (OuterVolumeSpecName: "kube-api-access-txqcr") pod "40b04da5-acdc-491e-b1a6-c2a377dc8284" (UID: "40b04da5-acdc-491e-b1a6-c2a377dc8284"). InnerVolumeSpecName "kube-api-access-txqcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.628563 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40b04da5-acdc-491e-b1a6-c2a377dc8284-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.628605 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txqcr\" (UniqueName: \"kubernetes.io/projected/40b04da5-acdc-491e-b1a6-c2a377dc8284-kube-api-access-txqcr\") on node \"crc\" DevicePath \"\""
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.868876 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"]
Oct 13 08:15:03 crc kubenswrapper[4833]: E1013 08:15:03.869370 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9152798d-5d54-4a88-8d83-05903594a058" containerName="heat-cfnapi"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.869393 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9152798d-5d54-4a88-8d83-05903594a058" containerName="heat-cfnapi"
Oct 13 08:15:03 crc kubenswrapper[4833]: E1013 08:15:03.869413 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b04da5-acdc-491e-b1a6-c2a377dc8284" containerName="collect-profiles"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.869421 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b04da5-acdc-491e-b1a6-c2a377dc8284" containerName="collect-profiles"
Oct 13 08:15:03 crc kubenswrapper[4833]: E1013 08:15:03.869441 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" containerName="heat-api"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.869451 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1ef5fc-eeeb-410a-90d3-998b02a6fe1e" containerName="heat-api"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.869717 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9152798d-5d54-4a88-8d83-05903594a058" containerName="heat-cfnapi"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.869744 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b04da5-acdc-491e-b1a6-c2a377dc8284" containerName="collect-profiles"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.871852 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.874048 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.879116 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"]
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.974895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf" event={"ID":"40b04da5-acdc-491e-b1a6-c2a377dc8284","Type":"ContainerDied","Data":"4bcb9aa40523b3e2d065721b49de399bc76c5e9bcc11fe21a7c9e56731a1924f"}
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.974932 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bcb9aa40523b3e2d065721b49de399bc76c5e9bcc11fe21a7c9e56731a1924f"
Oct 13 08:15:03 crc kubenswrapper[4833]: I1013 08:15:03.974970 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.035919 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.036006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.036261 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfhrn\" (UniqueName: \"kubernetes.io/projected/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-kube-api-access-pfhrn\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.138227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.138422 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfhrn\" (UniqueName: \"kubernetes.io/projected/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-kube-api-access-pfhrn\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.138754 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.139115 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.139121 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.156639 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfhrn\" (UniqueName: \"kubernetes.io/projected/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-kube-api-access-pfhrn\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.234865 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.444409 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5"]
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.464318 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339010-9chh5"]
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.638653 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91db143d-f5c3-48fa-b831-85ab090ffb9f" path="/var/lib/kubelet/pods/91db143d-f5c3-48fa-b831-85ab090ffb9f/volumes"
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.727898 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"]
Oct 13 08:15:04 crc kubenswrapper[4833]: W1013 08:15:04.734297 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded6a7dfb_e951_4ba3_a992_75b7bfd2ad5a.slice/crio-3ae5683791a282f51bcd12eb86fc5b85dce7b576b53f237dc408e0ee67eb4144 WatchSource:0}: Error finding container 3ae5683791a282f51bcd12eb86fc5b85dce7b576b53f237dc408e0ee67eb4144: Status 404 returned error can't find the container with id 3ae5683791a282f51bcd12eb86fc5b85dce7b576b53f237dc408e0ee67eb4144
Oct 13 08:15:04 crc kubenswrapper[4833]: I1013 08:15:04.986123 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k" event={"ID":"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a","Type":"ContainerStarted","Data":"3ae5683791a282f51bcd12eb86fc5b85dce7b576b53f237dc408e0ee67eb4144"}
Oct 13 08:15:06 crc kubenswrapper[4833]: I1013 08:15:06.003753 4833 generic.go:334] "Generic (PLEG): container finished" podID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerID="a362e0a796474ff07f20201876df30a3addeb1c777350644fd618d1a79292010" exitCode=0
Oct 13 08:15:06 crc kubenswrapper[4833]: I1013 08:15:06.003851 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k" event={"ID":"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a","Type":"ContainerDied","Data":"a362e0a796474ff07f20201876df30a3addeb1c777350644fd618d1a79292010"}
Oct 13 08:15:06 crc kubenswrapper[4833]: I1013 08:15:06.009307 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 13 08:15:07 crc kubenswrapper[4833]: I1013 08:15:07.627937 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8"
Oct 13 08:15:07 crc kubenswrapper[4833]: E1013 08:15:07.628962 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:15:08 crc kubenswrapper[4833]: I1013 08:15:08.029092 4833 generic.go:334] "Generic (PLEG): container finished" podID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerID="a20c583417aeddc2a7a8da97280ff146a4e6216b0c52899bbe47883de67a7ba9" exitCode=0
Oct 13 08:15:08 crc kubenswrapper[4833]: I1013 08:15:08.029156 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k" event={"ID":"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a","Type":"ContainerDied","Data":"a20c583417aeddc2a7a8da97280ff146a4e6216b0c52899bbe47883de67a7ba9"}
Oct 13 08:15:09 crc kubenswrapper[4833]: I1013 08:15:09.041136 4833 generic.go:334] "Generic (PLEG): container finished" podID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerID="cabffc797cef351e9bf450a8fac5def9835d58b8d46c542338c776d66e7b1587" exitCode=0
Oct 13 08:15:09 crc kubenswrapper[4833]: I1013 08:15:09.041203 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k" event={"ID":"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a","Type":"ContainerDied","Data":"cabffc797cef351e9bf450a8fac5def9835d58b8d46c542338c776d66e7b1587"}
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.508677 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.689389 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-util\") pod \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") "
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.689561 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfhrn\" (UniqueName: \"kubernetes.io/projected/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-kube-api-access-pfhrn\") pod \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") "
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.689637 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-bundle\") pod \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\" (UID: \"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a\") "
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.691268 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-bundle" (OuterVolumeSpecName: "bundle") pod "ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" (UID: "ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.702857 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-kube-api-access-pfhrn" (OuterVolumeSpecName: "kube-api-access-pfhrn") pod "ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" (UID: "ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a"). InnerVolumeSpecName "kube-api-access-pfhrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.707332 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-util" (OuterVolumeSpecName: "util") pod "ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" (UID: "ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.794661 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-util\") on node \"crc\" DevicePath \"\""
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.794705 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfhrn\" (UniqueName: \"kubernetes.io/projected/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-kube-api-access-pfhrn\") on node \"crc\" DevicePath \"\""
Oct 13 08:15:10 crc kubenswrapper[4833]: I1013 08:15:10.794726 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 08:15:11 crc kubenswrapper[4833]: I1013 08:15:11.045276 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9467-account-create-kw92q"]
Oct 13 08:15:11 crc kubenswrapper[4833]: I1013 08:15:11.062112 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9467-account-create-kw92q"]
Oct 13 08:15:11 crc kubenswrapper[4833]: I1013 08:15:11.069275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k" event={"ID":"ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a","Type":"ContainerDied","Data":"3ae5683791a282f51bcd12eb86fc5b85dce7b576b53f237dc408e0ee67eb4144"}
Oct 13 08:15:11 crc kubenswrapper[4833]: I1013 08:15:11.069316 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ae5683791a282f51bcd12eb86fc5b85dce7b576b53f237dc408e0ee67eb4144"
Oct 13 08:15:11 crc kubenswrapper[4833]: I1013 08:15:11.069397 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k"
Oct 13 08:15:12 crc kubenswrapper[4833]: I1013 08:15:12.638608 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7c7e79-b563-4b67-81de-4bd0ca50cb0a" path="/var/lib/kubelet/pods/dc7c7e79-b563-4b67-81de-4bd0ca50cb0a/volumes"
Oct 13 08:15:18 crc kubenswrapper[4833]: I1013 08:15:18.094899 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-h5lrr"]
Oct 13 08:15:18 crc kubenswrapper[4833]: I1013 08:15:18.102304 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-h5lrr"]
Oct 13 08:15:18 crc kubenswrapper[4833]: I1013 08:15:18.696799 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aefd2f1-a059-411f-a13c-767d0f58a117" path="/var/lib/kubelet/pods/4aefd2f1-a059-411f-a13c-767d0f58a117/volumes"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.627173 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8"
Oct 13 08:15:21 crc kubenswrapper[4833]: E1013 08:15:21.627710 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.697892 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh"]
Oct 13 08:15:21 crc kubenswrapper[4833]: E1013 08:15:21.699962 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerName="pull"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.699996 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerName="pull"
Oct 13 08:15:21 crc kubenswrapper[4833]: E1013 08:15:21.700020 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerName="util"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.700028 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerName="util"
Oct 13 08:15:21 crc kubenswrapper[4833]: E1013 08:15:21.700039 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerName="extract"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.700046 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerName="extract"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.700321 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a" containerName="extract"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.701068 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.703249 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.707156 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mbbgj"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.713471 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.733703 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh"]
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.837232 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"]
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.838807 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.841738 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.841767 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-d6q79"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.844339 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppvz\" (UniqueName: \"kubernetes.io/projected/22d23ba0-fd38-403e-8ec5-6c42942bd13a-kube-api-access-zppvz\") pod \"obo-prometheus-operator-7c8cf85677-7wnhh\" (UID: \"22d23ba0-fd38-403e-8ec5-6c42942bd13a\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.861359 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"]
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.862932 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.875332 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"]
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.885556 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"]
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.949850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/937ac9f6-a3cb-405d-a685-e0fcee7d3cea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t\" (UID: \"937ac9f6-a3cb-405d-a685-e0fcee7d3cea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.950249 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/937ac9f6-a3cb-405d-a685-e0fcee7d3cea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t\" (UID: \"937ac9f6-a3cb-405d-a685-e0fcee7d3cea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.950491 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppvz\" (UniqueName: \"kubernetes.io/projected/22d23ba0-fd38-403e-8ec5-6c42942bd13a-kube-api-access-zppvz\") pod \"obo-prometheus-operator-7c8cf85677-7wnhh\" (UID: \"22d23ba0-fd38-403e-8ec5-6c42942bd13a\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh"
Oct 13 08:15:21 crc kubenswrapper[4833]: I1013 08:15:21.968266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppvz\" (UniqueName: \"kubernetes.io/projected/22d23ba0-fd38-403e-8ec5-6c42942bd13a-kube-api-access-zppvz\") pod \"obo-prometheus-operator-7c8cf85677-7wnhh\" (UID: \"22d23ba0-fd38-403e-8ec5-6c42942bd13a\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.026181 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.052754 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68878938-abe8-45e8-bd4e-dc920d54cdfa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46\" (UID: \"68878938-abe8-45e8-bd4e-dc920d54cdfa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.052853 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/937ac9f6-a3cb-405d-a685-e0fcee7d3cea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t\" (UID: \"937ac9f6-a3cb-405d-a685-e0fcee7d3cea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.052920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68878938-abe8-45e8-bd4e-dc920d54cdfa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46\" (UID: \"68878938-abe8-45e8-bd4e-dc920d54cdfa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.052983 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/937ac9f6-a3cb-405d-a685-e0fcee7d3cea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t\" (UID: \"937ac9f6-a3cb-405d-a685-e0fcee7d3cea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.063286 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/937ac9f6-a3cb-405d-a685-e0fcee7d3cea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t\" (UID: \"937ac9f6-a3cb-405d-a685-e0fcee7d3cea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.066388 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/937ac9f6-a3cb-405d-a685-e0fcee7d3cea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t\" (UID: \"937ac9f6-a3cb-405d-a685-e0fcee7d3cea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.091414 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-9vqdj"]
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.102808 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.111506 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-q7djb"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.111672 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.138456 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-9vqdj"]
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.154615 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68878938-abe8-45e8-bd4e-dc920d54cdfa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46\" (UID: \"68878938-abe8-45e8-bd4e-dc920d54cdfa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.154783 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68878938-abe8-45e8-bd4e-dc920d54cdfa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46\" (UID: \"68878938-abe8-45e8-bd4e-dc920d54cdfa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.158703 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.159976 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68878938-abe8-45e8-bd4e-dc920d54cdfa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46\" (UID: \"68878938-abe8-45e8-bd4e-dc920d54cdfa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.190105 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68878938-abe8-45e8-bd4e-dc920d54cdfa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46\" (UID: \"68878938-abe8-45e8-bd4e-dc920d54cdfa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.259050 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshtm\" (UniqueName: \"kubernetes.io/projected/7c9058d7-a182-413c-b6a7-f25560ca3989-kube-api-access-tshtm\") pod \"observability-operator-cc5f78dfc-9vqdj\" (UID: \"7c9058d7-a182-413c-b6a7-f25560ca3989\") " pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.259470 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c9058d7-a182-413c-b6a7-f25560ca3989-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-9vqdj\" (UID: \"7c9058d7-a182-413c-b6a7-f25560ca3989\") " pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.361368 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c9058d7-a182-413c-b6a7-f25560ca3989-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-9vqdj\" (UID: \"7c9058d7-a182-413c-b6a7-f25560ca3989\") " pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.361481 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tshtm\" (UniqueName: \"kubernetes.io/projected/7c9058d7-a182-413c-b6a7-f25560ca3989-kube-api-access-tshtm\") pod \"observability-operator-cc5f78dfc-9vqdj\" (UID: \"7c9058d7-a182-413c-b6a7-f25560ca3989\") " pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.367298 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c9058d7-a182-413c-b6a7-f25560ca3989-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-9vqdj\" (UID: \"7c9058d7-a182-413c-b6a7-f25560ca3989\") " pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.413720 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-tjpvt"]
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.415305 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.417452 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-272dx"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.418387 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tshtm\" (UniqueName: \"kubernetes.io/projected/7c9058d7-a182-413c-b6a7-f25560ca3989-kube-api-access-tshtm\") pod \"observability-operator-cc5f78dfc-9vqdj\" (UID: \"7c9058d7-a182-413c-b6a7-f25560ca3989\") " pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.439811 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-tjpvt"]
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.463319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a572d30-12a1-498e-bc38-d27cb95328ce-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-tjpvt\" (UID: \"0a572d30-12a1-498e-bc38-d27cb95328ce\") " pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.463956 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmgbz\" (UniqueName: \"kubernetes.io/projected/0a572d30-12a1-498e-bc38-d27cb95328ce-kube-api-access-dmgbz\") pod \"perses-operator-54bc95c9fb-tjpvt\" (UID: \"0a572d30-12a1-498e-bc38-d27cb95328ce\") " pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.483522 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.568070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmgbz\" (UniqueName: \"kubernetes.io/projected/0a572d30-12a1-498e-bc38-d27cb95328ce-kube-api-access-dmgbz\") pod \"perses-operator-54bc95c9fb-tjpvt\" (UID: \"0a572d30-12a1-498e-bc38-d27cb95328ce\") " pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.568246 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a572d30-12a1-498e-bc38-d27cb95328ce-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-tjpvt\" (UID: \"0a572d30-12a1-498e-bc38-d27cb95328ce\") " pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.569182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a572d30-12a1-498e-bc38-d27cb95328ce-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-tjpvt\" (UID: \"0a572d30-12a1-498e-bc38-d27cb95328ce\") " pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.595648 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.597034 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmgbz\" (UniqueName: \"kubernetes.io/projected/0a572d30-12a1-498e-bc38-d27cb95328ce-kube-api-access-dmgbz\") pod \"perses-operator-54bc95c9fb-tjpvt\" (UID: \"0a572d30-12a1-498e-bc38-d27cb95328ce\") " pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.767490 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.805509 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t"]
Oct 13 08:15:22 crc kubenswrapper[4833]: W1013 08:15:22.847348 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod937ac9f6_a3cb_405d_a685_e0fcee7d3cea.slice/crio-eae0f17dbb91c66343db47c7de3504163b2ce8dae1ada372dca9b9742c92c00a WatchSource:0}: Error finding container eae0f17dbb91c66343db47c7de3504163b2ce8dae1ada372dca9b9742c92c00a: Status 404 returned error can't find the container with id eae0f17dbb91c66343db47c7de3504163b2ce8dae1ada372dca9b9742c92c00a
Oct 13 08:15:22 crc kubenswrapper[4833]: I1013 08:15:22.944523 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh"]
Oct 13 08:15:22 crc kubenswrapper[4833]: W1013 08:15:22.962680 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22d23ba0_fd38_403e_8ec5_6c42942bd13a.slice/crio-e22e63d37852f43d4923331e452cc1c28a9af283e7f489f4eeac3ba541117c29 WatchSource:0}: Error finding container e22e63d37852f43d4923331e452cc1c28a9af283e7f489f4eeac3ba541117c29: Status 404 returned error can't find the container with id e22e63d37852f43d4923331e452cc1c28a9af283e7f489f4eeac3ba541117c29
Oct 13 08:15:23 crc kubenswrapper[4833]: I1013 08:15:23.108078 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46"]
Oct 13 08:15:23 crc kubenswrapper[4833]: I1013 08:15:23.223326 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh" event={"ID":"22d23ba0-fd38-403e-8ec5-6c42942bd13a","Type":"ContainerStarted","Data":"e22e63d37852f43d4923331e452cc1c28a9af283e7f489f4eeac3ba541117c29"}
Oct 13 08:15:23 crc kubenswrapper[4833]: I1013 08:15:23.224787 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46" event={"ID":"68878938-abe8-45e8-bd4e-dc920d54cdfa","Type":"ContainerStarted","Data":"c410796e6496e0e8a953ed6b3ac72adb7d0dde32056ae56e03055390c98bd4cd"}
Oct 13 08:15:23 crc kubenswrapper[4833]: I1013 08:15:23.225845 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t" event={"ID":"937ac9f6-a3cb-405d-a685-e0fcee7d3cea","Type":"ContainerStarted","Data":"eae0f17dbb91c66343db47c7de3504163b2ce8dae1ada372dca9b9742c92c00a"}
Oct 13 08:15:23 crc kubenswrapper[4833]: I1013 08:15:23.305330 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-9vqdj"]
Oct 13 08:15:23 crc kubenswrapper[4833]: I1013 08:15:23.506985 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-tjpvt"]
Oct 13 08:15:23 crc kubenswrapper[4833]: W1013 08:15:23.511187 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a572d30_12a1_498e_bc38_d27cb95328ce.slice/crio-db0af7d6be8fd8e34c44012467566e3864ba9d733f4f77c95e359e5895935038 WatchSource:0}: Error finding container db0af7d6be8fd8e34c44012467566e3864ba9d733f4f77c95e359e5895935038: Status 404 returned error can't find the container with id db0af7d6be8fd8e34c44012467566e3864ba9d733f4f77c95e359e5895935038
Oct 13 08:15:24 crc kubenswrapper[4833]: I1013 08:15:24.252517 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt" event={"ID":"0a572d30-12a1-498e-bc38-d27cb95328ce","Type":"ContainerStarted","Data":"db0af7d6be8fd8e34c44012467566e3864ba9d733f4f77c95e359e5895935038"}
Oct 13 08:15:24 crc kubenswrapper[4833]: I1013 08:15:24.253850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj" event={"ID":"7c9058d7-a182-413c-b6a7-f25560ca3989","Type":"ContainerStarted","Data":"74c6765f20aee83b0f9aa70a3f8d7e54f2e9fad537d03c78e68363a82152883e"}
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.357878 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt" event={"ID":"0a572d30-12a1-498e-bc38-d27cb95328ce","Type":"ContainerStarted","Data":"2a529f4da91bc53be62384e49a4aa1cc84a0644709789d5436591893f2c446ca"}
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.358491 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.360433 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t" event={"ID":"937ac9f6-a3cb-405d-a685-e0fcee7d3cea","Type":"ContainerStarted","Data":"0bdfdbbe4e20e8b679399129b4821ba69dd58c054235bf57a0e7fe34e4a47094"}
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.362628 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh" event={"ID":"22d23ba0-fd38-403e-8ec5-6c42942bd13a","Type":"ContainerStarted","Data":"33b7b73c37a4a347e4516049b8368523a07b302f74ba6e4ab14b93dcf799845c"}
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.363942 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46" event={"ID":"68878938-abe8-45e8-bd4e-dc920d54cdfa","Type":"ContainerStarted","Data":"9e8001073aa909e59a2d30e805997a63b38ecc36bd25340d7c4cc051a4030ceb"}
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.365009 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj" event={"ID":"7c9058d7-a182-413c-b6a7-f25560ca3989","Type":"ContainerStarted","Data":"460521d0cc02fece7268e8821b328a55fb08ba81b5be7eae181fb0b8f635e917"}
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.365231 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.373995 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt" podStartSLOduration=2.807372903 podStartE2EDuration="9.373978854s" podCreationTimestamp="2025-10-13 08:15:22 +0000 UTC" firstStartedPulling="2025-10-13 08:15:23.515048484 +0000 UTC m=+6413.615471400" lastFinishedPulling="2025-10-13 08:15:30.081654435 +0000 UTC m=+6420.182077351" observedRunningTime="2025-10-13 08:15:31.371613206 +0000 UTC m=+6421.472036122" watchObservedRunningTime="2025-10-13 08:15:31.373978854 +0000 UTC m=+6421.474401770"
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.375945 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj"
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.397182 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-cc5f78dfc-9vqdj" podStartSLOduration=2.504964786 podStartE2EDuration="9.397164983s" podCreationTimestamp="2025-10-13 08:15:22 +0000 UTC" firstStartedPulling="2025-10-13 08:15:23.3049159 +0000 UTC m=+6413.405338816" lastFinishedPulling="2025-10-13 08:15:30.197116097 +0000 UTC m=+6420.297539013" observedRunningTime="2025-10-13 08:15:31.393774596 +0000 UTC m=+6421.494197532" watchObservedRunningTime="2025-10-13 08:15:31.397164983 +0000 UTC m=+6421.497587899"
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.421161 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46" podStartSLOduration=6.241591154 podStartE2EDuration="10.421146054s" podCreationTimestamp="2025-10-13 08:15:21 +0000 UTC" firstStartedPulling="2025-10-13 08:15:23.145316043 +0000 UTC m=+6413.245738959" lastFinishedPulling="2025-10-13 08:15:27.324870943 +0000 UTC m=+6417.425293859" observedRunningTime="2025-10-13 08:15:31.415207675 +0000 UTC m=+6421.515630581" watchObservedRunningTime="2025-10-13 08:15:31.421146054 +0000 UTC m=+6421.521568970"
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.439335 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-7wnhh" podStartSLOduration=3.353720636 podStartE2EDuration="10.439319461s" podCreationTimestamp="2025-10-13 08:15:21 +0000 UTC" firstStartedPulling="2025-10-13 08:15:22.993579809 +0000 UTC m=+6413.094002715" lastFinishedPulling="2025-10-13 08:15:30.079178624 +0000 UTC m=+6420.179601540" observedRunningTime="2025-10-13 08:15:31.432654291 +0000 UTC m=+6421.533077217" watchObservedRunningTime="2025-10-13 08:15:31.439319461 +0000 UTC m=+6421.539742377"
Oct 13 08:15:31 crc kubenswrapper[4833]: I1013 08:15:31.469325 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t" podStartSLOduration=3.252684113 podStartE2EDuration="10.469302603s" podCreationTimestamp="2025-10-13 08:15:21 +0000 UTC" firstStartedPulling="2025-10-13 08:15:22.858515969 +0000 UTC m=+6412.958938885" lastFinishedPulling="2025-10-13 08:15:30.075134449 +0000 UTC m=+6420.175557375" observedRunningTime="2025-10-13 08:15:31.459199856 +0000 UTC m=+6421.559622782" watchObservedRunningTime="2025-10-13 08:15:31.469302603 +0000 UTC m=+6421.569725519"
Oct 13 08:15:34 crc kubenswrapper[4833]: I1013 08:15:34.627246 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8"
Oct 13 08:15:34 crc kubenswrapper[4833]: E1013 08:15:34.627779 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:15:42 crc kubenswrapper[4833]: I1013 08:15:42.771269 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-54bc95c9fb-tjpvt"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.165451 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.166128 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="573128e7-9275-410b-8695-b2beb20484f9" containerName="openstackclient" containerID="cri-o://7577c7fee48c73c05eba3347c4095af439391a496d474574993139365c4d6e75" gracePeriod=2
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.188367 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.215759 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Oct 13 08:15:45 crc kubenswrapper[4833]: E1013 08:15:45.216184 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573128e7-9275-410b-8695-b2beb20484f9" containerName="openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.216202 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="573128e7-9275-410b-8695-b2beb20484f9" containerName="openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.216456 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="573128e7-9275-410b-8695-b2beb20484f9" containerName="openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.217188 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.223313 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="573128e7-9275-410b-8695-b2beb20484f9" podUID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.238125 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.356742 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config-secret\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.356813 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.356920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.356974 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbvt\" (UniqueName: \"kubernetes.io/projected/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-kube-api-access-vfbvt\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.459126 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.459190 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfbvt\" (UniqueName: \"kubernetes.io/projected/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-kube-api-access-vfbvt\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.459235 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config-secret\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.459273 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.460066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.465175 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config-secret\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.465600 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.481240 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfbvt\" (UniqueName: \"kubernetes.io/projected/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-kube-api-access-vfbvt\") pod \"openstackclient\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.534149 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.535555 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.537871 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hw74f"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.544096 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.553437 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.636607 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8"
Oct 13 08:15:45 crc kubenswrapper[4833]: E1013 08:15:45.637225 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.662112 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7lb9\" (UniqueName: \"kubernetes.io/projected/3ca0b086-13c9-4a2d-b67a-3df07689ea9f-kube-api-access-f7lb9\") pod \"kube-state-metrics-0\" (UID: \"3ca0b086-13c9-4a2d-b67a-3df07689ea9f\") " pod="openstack/kube-state-metrics-0"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.765076 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7lb9\" (UniqueName: \"kubernetes.io/projected/3ca0b086-13c9-4a2d-b67a-3df07689ea9f-kube-api-access-f7lb9\") pod \"kube-state-metrics-0\" (UID: \"3ca0b086-13c9-4a2d-b67a-3df07689ea9f\") " pod="openstack/kube-state-metrics-0"
Oct 13 08:15:45 crc kubenswrapper[4833]: I1013 08:15:45.814314 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7lb9\" (UniqueName: \"kubernetes.io/projected/3ca0b086-13c9-4a2d-b67a-3df07689ea9f-kube-api-access-f7lb9\") pod \"kube-state-metrics-0\" (UID: \"3ca0b086-13c9-4a2d-b67a-3df07689ea9f\") " pod="openstack/kube-state-metrics-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.012773 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.144239 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rdxgm"]
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.167162 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rdxgm"]
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.307438 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.309747 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.311675 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-x88cq"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.318654 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.318776 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.318881 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.380294 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.494400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.494552 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.494669 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4m5\" (UniqueName: \"kubernetes.io/projected/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-kube-api-access-xs4m5\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.494819 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.494848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.494885 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.511715 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.595924 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.596355 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.596409 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4m5\" (UniqueName: \"kubernetes.io/projected/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-kube-api-access-xs4m5\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.596474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.596501 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.596546 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.597147 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.648609 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0"
Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.669629 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") "
pod="openstack/alertmanager-metric-storage-0" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.671004 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.677652 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a0bb42-7255-43d8-8cc6-3a3696c94cde" path="/var/lib/kubelet/pods/e8a0bb42-7255-43d8-8cc6-3a3696c94cde/volumes" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.678355 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2ea8e240-e5ac-4a76-970e-cf5cb0a94762","Type":"ContainerStarted","Data":"8838714e81b26329146d38e39fb89ea0d15f5fbda1874e9e8ba2c7d8ef847615"} Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.684992 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.697383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4m5\" (UniqueName: \"kubernetes.io/projected/19fdc94b-9174-42a2-ad1f-7f52f79daa6b-kube-api-access-xs4m5\") pod \"alertmanager-metric-storage-0\" (UID: \"19fdc94b-9174-42a2-ad1f-7f52f79daa6b\") " pod="openstack/alertmanager-metric-storage-0" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.741145 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.828530 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.830846 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.844098 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2n28t" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.844335 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.844433 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.844555 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.844711 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.845136 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 13 08:15:46 crc kubenswrapper[4833]: I1013 08:15:46.876065 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.031773 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-config\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.031821 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.031894 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.031924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.031944 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.031976 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76fbe037-9460-47b3-afd9-a14bca5d61eb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.032038 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76fbe037-9460-47b3-afd9-a14bca5d61eb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.032074 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcnqt\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-kube-api-access-lcnqt\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.078880 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.134346 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcnqt\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-kube-api-access-lcnqt\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.134427 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-config\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.134452 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.134506 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.134584 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.134604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.134633 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76fbe037-9460-47b3-afd9-a14bca5d61eb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.134844 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76fbe037-9460-47b3-afd9-a14bca5d61eb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.135891 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76fbe037-9460-47b3-afd9-a14bca5d61eb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.146465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.152104 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.155978 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.156017 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dc0087efccc13a5ad7870124f878357356eb0ef59dea8d42ec4c6c18d3ce37a0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.156313 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-config\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.180067 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.185374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcnqt\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-kube-api-access-lcnqt\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.194185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76fbe037-9460-47b3-afd9-a14bca5d61eb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.215943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"prometheus-metric-storage-0\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: W1013 08:15:47.466206 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19fdc94b_9174_42a2_ad1f_7f52f79daa6b.slice/crio-77e8b499c12ab3420ebb5ef24b2ad1938e56794e4113d3e2917889122ef2d6aa WatchSource:0}: Error finding container 77e8b499c12ab3420ebb5ef24b2ad1938e56794e4113d3e2917889122ef2d6aa: Status 404 returned error can't find the container with id 77e8b499c12ab3420ebb5ef24b2ad1938e56794e4113d3e2917889122ef2d6aa Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.466768 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.504149 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.671935 4833 generic.go:334] "Generic (PLEG): container finished" podID="573128e7-9275-410b-8695-b2beb20484f9" containerID="7577c7fee48c73c05eba3347c4095af439391a496d474574993139365c4d6e75" exitCode=137 Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.675098 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ca0b086-13c9-4a2d-b67a-3df07689ea9f","Type":"ContainerStarted","Data":"751cf68062c6a38f7d3ceb079cd8e4abdf3f6cba4d0c6a5035a8585d4afb1dc2"} Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.676356 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2ea8e240-e5ac-4a76-970e-cf5cb0a94762","Type":"ContainerStarted","Data":"35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53"} Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.685367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"19fdc94b-9174-42a2-ad1f-7f52f79daa6b","Type":"ContainerStarted","Data":"77e8b499c12ab3420ebb5ef24b2ad1938e56794e4113d3e2917889122ef2d6aa"} Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.838811 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.951373 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-combined-ca-bundle\") pod \"573128e7-9275-410b-8695-b2beb20484f9\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.951499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-openstack-config-secret\") pod \"573128e7-9275-410b-8695-b2beb20484f9\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.951527 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89258\" (UniqueName: \"kubernetes.io/projected/573128e7-9275-410b-8695-b2beb20484f9-kube-api-access-89258\") pod \"573128e7-9275-410b-8695-b2beb20484f9\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.951668 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/573128e7-9275-410b-8695-b2beb20484f9-openstack-config\") pod \"573128e7-9275-410b-8695-b2beb20484f9\" (UID: \"573128e7-9275-410b-8695-b2beb20484f9\") " Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.957019 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573128e7-9275-410b-8695-b2beb20484f9-kube-api-access-89258" (OuterVolumeSpecName: "kube-api-access-89258") pod "573128e7-9275-410b-8695-b2beb20484f9" (UID: "573128e7-9275-410b-8695-b2beb20484f9"). InnerVolumeSpecName "kube-api-access-89258". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.977702 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573128e7-9275-410b-8695-b2beb20484f9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "573128e7-9275-410b-8695-b2beb20484f9" (UID: "573128e7-9275-410b-8695-b2beb20484f9"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:15:47 crc kubenswrapper[4833]: I1013 08:15:47.988690 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "573128e7-9275-410b-8695-b2beb20484f9" (UID: "573128e7-9275-410b-8695-b2beb20484f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.018677 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "573128e7-9275-410b-8695-b2beb20484f9" (UID: "573128e7-9275-410b-8695-b2beb20484f9"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.036082 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.036064315 podStartE2EDuration="3.036064315s" podCreationTimestamp="2025-10-13 08:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:15:47.714977327 +0000 UTC m=+6437.815400243" watchObservedRunningTime="2025-10-13 08:15:48.036064315 +0000 UTC m=+6438.136487231" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.037672 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:15:48 crc kubenswrapper[4833]: W1013 08:15:48.039017 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76fbe037_9460_47b3_afd9_a14bca5d61eb.slice/crio-0369f397951fe632f08ee6c83f8345491f76d840b352627b2f1ac7a7ae67d2fa WatchSource:0}: Error finding container 0369f397951fe632f08ee6c83f8345491f76d840b352627b2f1ac7a7ae67d2fa: Status 404 returned error can't find the container with id 0369f397951fe632f08ee6c83f8345491f76d840b352627b2f1ac7a7ae67d2fa Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.054433 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.054456 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/573128e7-9275-410b-8695-b2beb20484f9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.054467 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89258\" (UniqueName: \"kubernetes.io/projected/573128e7-9275-410b-8695-b2beb20484f9-kube-api-access-89258\") on node \"crc\" DevicePath \"\"" Oct 13 08:15:48 crc kubenswrapper[4833]: 
I1013 08:15:48.054476 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/573128e7-9275-410b-8695-b2beb20484f9-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.641614 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573128e7-9275-410b-8695-b2beb20484f9" path="/var/lib/kubelet/pods/573128e7-9275-410b-8695-b2beb20484f9/volumes" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.697225 4833 scope.go:117] "RemoveContainer" containerID="7577c7fee48c73c05eba3347c4095af439391a496d474574993139365c4d6e75" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.697425 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.700832 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerStarted","Data":"0369f397951fe632f08ee6c83f8345491f76d840b352627b2f1ac7a7ae67d2fa"} Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.706913 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ca0b086-13c9-4a2d-b67a-3df07689ea9f","Type":"ContainerStarted","Data":"ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2"} Oct 13 08:15:48 crc kubenswrapper[4833]: I1013 08:15:48.743610 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.282220864 podStartE2EDuration="3.74359424s" podCreationTimestamp="2025-10-13 08:15:45 +0000 UTC" firstStartedPulling="2025-10-13 08:15:47.105687336 +0000 UTC m=+6437.206110252" lastFinishedPulling="2025-10-13 08:15:47.567060712 +0000 UTC m=+6437.667483628" observedRunningTime="2025-10-13 08:15:48.738417373 +0000 UTC m=+6438.838840309" watchObservedRunningTime="2025-10-13 08:15:48.74359424 +0000 UTC m=+6438.844017156" Oct 13 08:15:49 crc kubenswrapper[4833]: I1013 08:15:49.719676 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 13 08:15:53 crc kubenswrapper[4833]: I1013 08:15:53.769980 4833 scope.go:117] "RemoveContainer" containerID="e849214437c4b6212427c2c853defeb99c7ab6ca4fa6e178da01d81866796277" Oct 13 08:15:54 crc kubenswrapper[4833]: I1013 08:15:54.004768 4833 scope.go:117] "RemoveContainer" containerID="ba1564ddd4c25a7af1e27230ca7758620e0484cf78d84e5ae663de7fbef5e22b" Oct 13 08:15:54 crc kubenswrapper[4833]: I1013 08:15:54.071312 4833 scope.go:117] "RemoveContainer" containerID="71e2dfd01c05c470207a663618ab14c6077d7ff6ae013e0ba734777e754fb07b" Oct 13 08:15:54 crc kubenswrapper[4833]: I1013 08:15:54.118232 4833 scope.go:117] "RemoveContainer" containerID="cf36ebb34bf44f17c44b05dc7c5dee6881a8de9e3b8a77267d14493792d17a58" Oct 13 08:15:54 crc kubenswrapper[4833]: I1013 08:15:54.165828 4833 scope.go:117] "RemoveContainer" containerID="a214e216d3bfa1eccf16a04212e5ec91cf8a9a7411baa081f51bc0977f4d636c" Oct 13 08:15:54 crc kubenswrapper[4833]: I1013 08:15:54.804080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerStarted","Data":"cbe0aad2b172db90e3c77e069df52b1f71826715e4507779d2df8cd092f71d2e"} Oct 13 08:15:55 crc kubenswrapper[4833]: I1013 08:15:55.820132 4833 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"19fdc94b-9174-42a2-ad1f-7f52f79daa6b","Type":"ContainerStarted","Data":"1e07286daf978a2f3b22d782b4152fe7b6ec806b910333b501a001bc42e9c401"} Oct 13 08:15:56 crc kubenswrapper[4833]: I1013 08:15:56.019118 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 13 08:15:57 crc kubenswrapper[4833]: I1013 08:15:57.035326 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6dca-account-create-9jxnp"] Oct 13 08:15:57 crc kubenswrapper[4833]: I1013 08:15:57.045680 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6dca-account-create-9jxnp"] Oct 13 08:15:58 crc kubenswrapper[4833]: I1013 08:15:58.632239 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:15:58 crc kubenswrapper[4833]: E1013 08:15:58.633317 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:15:58 crc kubenswrapper[4833]: I1013 08:15:58.648836 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="909cab0a-686f-44ed-a0e9-f17d241a151a" path="/var/lib/kubelet/pods/909cab0a-686f-44ed-a0e9-f17d241a151a/volumes" Oct 13 08:16:01 crc kubenswrapper[4833]: I1013 08:16:01.888472 4833 generic.go:334] "Generic (PLEG): container finished" podID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerID="cbe0aad2b172db90e3c77e069df52b1f71826715e4507779d2df8cd092f71d2e" exitCode=0 Oct 13 08:16:01 crc kubenswrapper[4833]: I1013 08:16:01.888519 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerDied","Data":"cbe0aad2b172db90e3c77e069df52b1f71826715e4507779d2df8cd092f71d2e"} Oct 13 08:16:03 crc kubenswrapper[4833]: I1013 08:16:03.045602 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-k7s9w"] Oct 13 08:16:03 crc kubenswrapper[4833]: I1013 08:16:03.062502 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-k7s9w"] Oct 13 08:16:04 crc kubenswrapper[4833]: I1013 08:16:04.639775 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b134464-aede-4c01-b02c-97b08f757af5" path="/var/lib/kubelet/pods/6b134464-aede-4c01-b02c-97b08f757af5/volumes" Oct 13 08:16:04 crc kubenswrapper[4833]: I1013 08:16:04.935160 4833 generic.go:334] "Generic (PLEG): container finished" podID="19fdc94b-9174-42a2-ad1f-7f52f79daa6b" containerID="1e07286daf978a2f3b22d782b4152fe7b6ec806b910333b501a001bc42e9c401" exitCode=0 Oct 13 08:16:04 crc kubenswrapper[4833]: I1013 08:16:04.935200 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"19fdc94b-9174-42a2-ad1f-7f52f79daa6b","Type":"ContainerDied","Data":"1e07286daf978a2f3b22d782b4152fe7b6ec806b910333b501a001bc42e9c401"} Oct 13 08:16:07 crc kubenswrapper[4833]: I1013 08:16:07.971847 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerStarted","Data":"57cf0090305feb99b2335047f322f58529400e94b44847415165c99a57637f80"} Oct 13 08:16:08 crc kubenswrapper[4833]: I1013 08:16:08.985798 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"19fdc94b-9174-42a2-ad1f-7f52f79daa6b","Type":"ContainerStarted","Data":"da2b19128be88e4e075cf76d6fa26e49b7d9a823e659cd219c64bbd02008a4fd"} Oct 13 08:16:09 crc kubenswrapper[4833]: I1013 08:16:09.627369 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:16:09 crc kubenswrapper[4833]: E1013 08:16:09.628150 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:16:12 crc kubenswrapper[4833]: I1013 08:16:12.039178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerStarted","Data":"2bc15ef017b2fb048d6341921ae5f27cd8e8759b79ca3bfb8d5e3be7381c7475"} Oct 13 08:16:14 crc kubenswrapper[4833]: I1013 08:16:14.088648 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"19fdc94b-9174-42a2-ad1f-7f52f79daa6b","Type":"ContainerStarted","Data":"8fef1a48a4fac72facf3dcc2fa38fdacb922f5ce148f2eb322799f58c44a3bdb"} Oct 13 08:16:14 crc kubenswrapper[4833]: I1013 08:16:14.088967 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Oct 13 08:16:14 crc kubenswrapper[4833]: I1013 08:16:14.092731 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Oct 13 08:16:14 crc kubenswrapper[4833]: I1013 08:16:14.116759 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.01283249 podStartE2EDuration="28.11674265s" podCreationTimestamp="2025-10-13 08:15:46 +0000 UTC" firstStartedPulling="2025-10-13 08:15:47.468349056 +0000 UTC m=+6437.568771972" lastFinishedPulling="2025-10-13 08:16:08.572259186 +0000 UTC m=+6458.672682132" observedRunningTime="2025-10-13 08:16:14.112011585 +0000 UTC m=+6464.212434531" watchObservedRunningTime="2025-10-13 08:16:14.11674265 +0000 UTC m=+6464.217165566" Oct 13 08:16:19 crc kubenswrapper[4833]: I1013 08:16:19.159326 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerStarted","Data":"6985186bdb335e2c7d5bd8d3e1dc57a49344f628eaf11e84b54d2f10fa553aa5"} Oct 13 08:16:19 crc kubenswrapper[4833]: I1013 08:16:19.201521 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.898179744 podStartE2EDuration="34.201494953s" podCreationTimestamp="2025-10-13 08:15:45 +0000 UTC" firstStartedPulling="2025-10-13 08:15:48.041102919 +0000 UTC m=+6438.141525845" lastFinishedPulling="2025-10-13 08:16:18.344418138 +0000 UTC m=+6468.444841054" 
observedRunningTime="2025-10-13 08:16:19.1915319 +0000 UTC m=+6469.291954816" watchObservedRunningTime="2025-10-13 08:16:19.201494953 +0000 UTC m=+6469.301917879" Oct 13 08:16:22 crc kubenswrapper[4833]: I1013 08:16:22.505147 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:23 crc kubenswrapper[4833]: I1013 08:16:23.628131 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:16:23 crc kubenswrapper[4833]: E1013 08:16:23.629708 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.677808 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.681221 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.684400 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.684599 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.700011 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.833993 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-run-httpd\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.834094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.834191 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-config-data\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.834216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.834259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2jh\" (UniqueName: 
\"kubernetes.io/projected/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-kube-api-access-wn2jh\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.834358 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-scripts\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.834415 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-log-httpd\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.936076 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-scripts\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.936125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-log-httpd\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.936178 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-run-httpd\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.936222 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.936283 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-config-data\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.936301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.936329 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn2jh\" (UniqueName: \"kubernetes.io/projected/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-kube-api-access-wn2jh\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.937395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-log-httpd\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.937914 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-run-httpd\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.942809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.942821 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-config-data\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.943288 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.943415 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-scripts\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:27 crc kubenswrapper[4833]: I1013 08:16:27.959094 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn2jh\" (UniqueName: \"kubernetes.io/projected/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-kube-api-access-wn2jh\") pod \"ceilometer-0\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " pod="openstack/ceilometer-0" Oct 13 08:16:28 crc kubenswrapper[4833]: I1013 08:16:28.044911 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:16:28 crc kubenswrapper[4833]: I1013 08:16:28.599696 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:16:29 crc kubenswrapper[4833]: I1013 08:16:29.265396 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerStarted","Data":"845f485be06cd57f2fd68e325e0caf5b10c3391a4770ea210e66e1111e527b10"} Oct 13 08:16:30 crc kubenswrapper[4833]: I1013 08:16:30.276790 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerStarted","Data":"3343b3e3d0e54dcb8421271c520b8bd5f7de7c82c3209c61f692a53a6d3f2d87"} Oct 13 08:16:31 crc kubenswrapper[4833]: I1013 08:16:31.288879 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerStarted","Data":"f1482bab91418c52ebdc23e94af7096ff2aeb9f8425db5212a5f6b07344fffe3"} Oct 13 08:16:31 crc kubenswrapper[4833]: I1013 08:16:31.289276 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerStarted","Data":"cf4e0ccd900912766f52bf0e3fa373bf8e64e367a52a7fd970b82ca8c78ed6fe"} Oct 13 08:16:32 crc kubenswrapper[4833]: I1013 08:16:32.505892 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:32 crc kubenswrapper[4833]: I1013 08:16:32.509180 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:33 crc kubenswrapper[4833]: I1013 08:16:33.342952 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerStarted","Data":"e22308bd039d7054fa9dfe17d545a67124c75c9773e3c1be3cab9b5c3027e537"} Oct 13 08:16:33 crc kubenswrapper[4833]: I1013 08:16:33.345613 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:33 crc kubenswrapper[4833]: I1013 08:16:33.376794 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.565702885 podStartE2EDuration="6.376770789s" podCreationTimestamp="2025-10-13 08:16:27 +0000 UTC" firstStartedPulling="2025-10-13 08:16:28.618740315 +0000 UTC m=+6478.719163231" lastFinishedPulling="2025-10-13 08:16:32.429808179 +0000 UTC m=+6482.530231135" observedRunningTime="2025-10-13 08:16:33.369583575 +0000 UTC m=+6483.470006501" watchObservedRunningTime="2025-10-13 08:16:33.376770789 +0000 UTC m=+6483.477193705" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.354780 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.627059 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:16:34 crc kubenswrapper[4833]: E1013 08:16:34.627329 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.641760 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.641949 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762" containerName="openstackclient" containerID="cri-o://35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53" gracePeriod=2 Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.653651 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.667472 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 13 08:16:34 crc kubenswrapper[4833]: E1013 08:16:34.668030 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762" containerName="openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.668048 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762" containerName="openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.668313 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762" containerName="openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.669375 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.673135 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762" podUID="85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.686871 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.796135 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82rmc\" (UniqueName: \"kubernetes.io/projected/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-kube-api-access-82rmc\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.796390 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.796429 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-openstack-config\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.796482 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-openstack-config-secret\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.898885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.898940 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-openstack-config\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.898991 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-openstack-config-secret\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.899036 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82rmc\" (UniqueName: \"kubernetes.io/projected/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-kube-api-access-82rmc\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.899985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-openstack-config\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.905183 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.916168 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-openstack-config-secret\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:34 crc kubenswrapper[4833]: I1013 08:16:34.951729 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82rmc\" (UniqueName: \"kubernetes.io/projected/85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09-kube-api-access-82rmc\") pod \"openstackclient\" (UID: \"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09\") " pod="openstack/openstackclient" Oct 13 08:16:35 crc kubenswrapper[4833]: I1013 08:16:35.001011 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 13 08:16:35 crc kubenswrapper[4833]: I1013 08:16:35.770968 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 08:16:35 crc kubenswrapper[4833]: W1013 08:16:35.774330 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85c42c8c_b34b_4d78_b2c7_e4bbd86f0a09.slice/crio-854254198d3506754df24b6c6f0f81ce1505ff1c4f747ec0605920eafbd82427 WatchSource:0}: Error finding container 854254198d3506754df24b6c6f0f81ce1505ff1c4f747ec0605920eafbd82427: Status 404 returned error can't find the container with id 854254198d3506754df24b6c6f0f81ce1505ff1c4f747ec0605920eafbd82427 Oct 13 08:16:35 crc kubenswrapper[4833]: I1013 08:16:35.937841 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:16:35 crc kubenswrapper[4833]: I1013 08:16:35.938395 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="prometheus" containerID="cri-o://57cf0090305feb99b2335047f322f58529400e94b44847415165c99a57637f80" gracePeriod=600 Oct 13 08:16:35 crc kubenswrapper[4833]: I1013 08:16:35.938446 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="config-reloader" containerID="cri-o://2bc15ef017b2fb048d6341921ae5f27cd8e8759b79ca3bfb8d5e3be7381c7475" gracePeriod=600 Oct 13 08:16:35 crc kubenswrapper[4833]: I1013 08:16:35.938446 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="thanos-sidecar" containerID="cri-o://6985186bdb335e2c7d5bd8d3e1dc57a49344f628eaf11e84b54d2f10fa553aa5" gracePeriod=600 Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.377385 4833 generic.go:334] "Generic (PLEG): container finished" podID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerID="6985186bdb335e2c7d5bd8d3e1dc57a49344f628eaf11e84b54d2f10fa553aa5" exitCode=0 Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.377646 4833 generic.go:334] "Generic (PLEG): container finished" podID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerID="2bc15ef017b2fb048d6341921ae5f27cd8e8759b79ca3bfb8d5e3be7381c7475" exitCode=0 Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.377655 4833 generic.go:334] "Generic (PLEG): container finished" podID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerID="57cf0090305feb99b2335047f322f58529400e94b44847415165c99a57637f80" exitCode=0 Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.377570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerDied","Data":"6985186bdb335e2c7d5bd8d3e1dc57a49344f628eaf11e84b54d2f10fa553aa5"} Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.377747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerDied","Data":"2bc15ef017b2fb048d6341921ae5f27cd8e8759b79ca3bfb8d5e3be7381c7475"} Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.377788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerDied","Data":"57cf0090305feb99b2335047f322f58529400e94b44847415165c99a57637f80"} Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.379772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09","Type":"ContainerStarted","Data":"eee9eb12cd5b053ff05837bd82d599a964e6145e67ab9492b1e645893f088294"} Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.379806 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09","Type":"ContainerStarted","Data":"854254198d3506754df24b6c6f0f81ce1505ff1c4f747ec0605920eafbd82427"} Oct 13 08:16:36 crc kubenswrapper[4833]: I1013 08:16:36.406532 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.406515332 podStartE2EDuration="2.406515332s" podCreationTimestamp="2025-10-13 08:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:16:36.4000911 +0000 UTC m=+6486.500514016" watchObservedRunningTime="2025-10-13 08:16:36.406515332 +0000 UTC m=+6486.506938248" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.041724 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.051429 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.185773 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-combined-ca-bundle\") pod \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.185852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfbvt\" (UniqueName: \"kubernetes.io/projected/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-kube-api-access-vfbvt\") pod \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.185924 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-config\") pod \"76fbe037-9460-47b3-afd9-a14bca5d61eb\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186129 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"76fbe037-9460-47b3-afd9-a14bca5d61eb\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186183 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config-secret\") pod \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186216 
4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config\") pod \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\" (UID: \"2ea8e240-e5ac-4a76-970e-cf5cb0a94762\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186259 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcnqt\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-kube-api-access-lcnqt\") pod \"76fbe037-9460-47b3-afd9-a14bca5d61eb\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186319 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-thanos-prometheus-http-client-file\") pod \"76fbe037-9460-47b3-afd9-a14bca5d61eb\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186359 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76fbe037-9460-47b3-afd9-a14bca5d61eb-config-out\") pod \"76fbe037-9460-47b3-afd9-a14bca5d61eb\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186472 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76fbe037-9460-47b3-afd9-a14bca5d61eb-prometheus-metric-storage-rulefiles-0\") pod \"76fbe037-9460-47b3-afd9-a14bca5d61eb\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186580 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-tls-assets\") pod \"76fbe037-9460-47b3-afd9-a14bca5d61eb\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.186690 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-web-config\") pod \"76fbe037-9460-47b3-afd9-a14bca5d61eb\" (UID: \"76fbe037-9460-47b3-afd9-a14bca5d61eb\") " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.189492 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fbe037-9460-47b3-afd9-a14bca5d61eb-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "76fbe037-9460-47b3-afd9-a14bca5d61eb" (UID: "76fbe037-9460-47b3-afd9-a14bca5d61eb"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.201567 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-kube-api-access-lcnqt" (OuterVolumeSpecName: "kube-api-access-lcnqt") pod "76fbe037-9460-47b3-afd9-a14bca5d61eb" (UID: "76fbe037-9460-47b3-afd9-a14bca5d61eb"). InnerVolumeSpecName "kube-api-access-lcnqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.202688 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76fbe037-9460-47b3-afd9-a14bca5d61eb-config-out" (OuterVolumeSpecName: "config-out") pod "76fbe037-9460-47b3-afd9-a14bca5d61eb" (UID: "76fbe037-9460-47b3-afd9-a14bca5d61eb"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.206692 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-config" (OuterVolumeSpecName: "config") pod "76fbe037-9460-47b3-afd9-a14bca5d61eb" (UID: "76fbe037-9460-47b3-afd9-a14bca5d61eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.206812 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "76fbe037-9460-47b3-afd9-a14bca5d61eb" (UID: "76fbe037-9460-47b3-afd9-a14bca5d61eb"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.220810 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "76fbe037-9460-47b3-afd9-a14bca5d61eb" (UID: "76fbe037-9460-47b3-afd9-a14bca5d61eb"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.232814 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-kube-api-access-vfbvt" (OuterVolumeSpecName: "kube-api-access-vfbvt") pod "2ea8e240-e5ac-4a76-970e-cf5cb0a94762" (UID: "2ea8e240-e5ac-4a76-970e-cf5cb0a94762"). InnerVolumeSpecName "kube-api-access-vfbvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.237991 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-9rppj"] Oct 13 08:16:37 crc kubenswrapper[4833]: E1013 08:16:37.238569 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="init-config-reloader" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.238593 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="init-config-reloader" Oct 13 08:16:37 crc kubenswrapper[4833]: E1013 08:16:37.238604 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="prometheus" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.238612 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="prometheus" Oct 13 08:16:37 crc kubenswrapper[4833]: E1013 08:16:37.238628 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="thanos-sidecar" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.238637 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="thanos-sidecar" Oct 13 08:16:37 crc kubenswrapper[4833]: E1013 08:16:37.238660 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="config-reloader" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.238668 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="config-reloader" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.238969 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="config-reloader" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.238995 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="prometheus" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.239009 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" containerName="thanos-sidecar" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.239905 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9rppj" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.249394 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-9rppj"] Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.260076 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "76fbe037-9460-47b3-afd9-a14bca5d61eb" (UID: "76fbe037-9460-47b3-afd9-a14bca5d61eb"). InnerVolumeSpecName "pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.262048 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-web-config" (OuterVolumeSpecName: "web-config") pod "76fbe037-9460-47b3-afd9-a14bca5d61eb" (UID: "76fbe037-9460-47b3-afd9-a14bca5d61eb"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.266215 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2ea8e240-e5ac-4a76-970e-cf5cb0a94762" (UID: "2ea8e240-e5ac-4a76-970e-cf5cb0a94762"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293442 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2ea8e240-e5ac-4a76-970e-cf5cb0a94762" (UID: "2ea8e240-e5ac-4a76-970e-cf5cb0a94762"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293463 4833 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-web-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293516 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfbvt\" (UniqueName: \"kubernetes.io/projected/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-kube-api-access-vfbvt\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293527 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293579 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") on node \"crc\" " Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293592 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293604 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcnqt\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-kube-api-access-lcnqt\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293615 4833 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76fbe037-9460-47b3-afd9-a14bca5d61eb-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293626 4833 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/76fbe037-9460-47b3-afd9-a14bca5d61eb-config-out\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293636 4833 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76fbe037-9460-47b3-afd9-a14bca5d61eb-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.293645 4833 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76fbe037-9460-47b3-afd9-a14bca5d61eb-tls-assets\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.323765 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea8e240-e5ac-4a76-970e-cf5cb0a94762" (UID: "2ea8e240-e5ac-4a76-970e-cf5cb0a94762"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.330573 4833 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.331835 4833 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08") on node "crc" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.395759 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbctg\" (UniqueName: \"kubernetes.io/projected/2a1ff485-e9d3-42cb-a91b-893838097d21-kube-api-access-mbctg\") pod \"aodh-db-create-9rppj\" (UID: \"2a1ff485-e9d3-42cb-a91b-893838097d21\") " pod="openstack/aodh-db-create-9rppj" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.395878 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.395900 4833 reconciler_common.go:293] "Volume detached for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.395910 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ea8e240-e5ac-4a76-970e-cf5cb0a94762-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.416981 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.417202 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76fbe037-9460-47b3-afd9-a14bca5d61eb","Type":"ContainerDied","Data":"0369f397951fe632f08ee6c83f8345491f76d840b352627b2f1ac7a7ae67d2fa"} Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.417584 4833 scope.go:117] "RemoveContainer" containerID="6985186bdb335e2c7d5bd8d3e1dc57a49344f628eaf11e84b54d2f10fa553aa5" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.432220 4833 generic.go:334] "Generic (PLEG): container finished" podID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762" containerID="35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53" exitCode=137 Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.433180 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.461685 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.465562 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762" podUID="85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.466160 4833 scope.go:117] "RemoveContainer" containerID="2bc15ef017b2fb048d6341921ae5f27cd8e8759b79ca3bfb8d5e3be7381c7475" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.471590 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.498153 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbctg\" (UniqueName: \"kubernetes.io/projected/2a1ff485-e9d3-42cb-a91b-893838097d21-kube-api-access-mbctg\") pod \"aodh-db-create-9rppj\" (UID: \"2a1ff485-e9d3-42cb-a91b-893838097d21\") " pod="openstack/aodh-db-create-9rppj" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.519236 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.521779 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.523742 4833 scope.go:117] "RemoveContainer" containerID="57cf0090305feb99b2335047f322f58529400e94b44847415165c99a57637f80" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.530107 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.530273 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2n28t" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.530458 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.531948 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.533969 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.537007 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.550895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbctg\" (UniqueName: \"kubernetes.io/projected/2a1ff485-e9d3-42cb-a91b-893838097d21-kube-api-access-mbctg\") pod \"aodh-db-create-9rppj\" (UID: \"2a1ff485-e9d3-42cb-a91b-893838097d21\") " pod="openstack/aodh-db-create-9rppj" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.575254 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.599510 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604002 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604055 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6389489a-7b63-44c5-aa24-8ff7f36399c9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604102 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc 
kubenswrapper[4833]: I1013 08:16:37.604141 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604180 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-config\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604211 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzdtf\" (UniqueName: \"kubernetes.io/projected/6389489a-7b63-44c5-aa24-8ff7f36399c9-kube-api-access-jzdtf\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604242 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6389489a-7b63-44c5-aa24-8ff7f36399c9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604297 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604314 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.604334 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6389489a-7b63-44c5-aa24-8ff7f36399c9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.703075 4833 scope.go:117] "RemoveContainer" containerID="cbe0aad2b172db90e3c77e069df52b1f71826715e4507779d2df8cd092f71d2e" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.705721 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.705770 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6389489a-7b63-44c5-aa24-8ff7f36399c9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.705878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.705909 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6389489a-7b63-44c5-aa24-8ff7f36399c9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.705956 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.706006 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.706034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.706061 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-config\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.706096 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzdtf\" (UniqueName: 
\"kubernetes.io/projected/6389489a-7b63-44c5-aa24-8ff7f36399c9-kube-api-access-jzdtf\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.706126 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6389489a-7b63-44c5-aa24-8ff7f36399c9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.706200 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.709849 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.725386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-config\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.727045 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6389489a-7b63-44c5-aa24-8ff7f36399c9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.728002 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.728520 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6389489a-7b63-44c5-aa24-8ff7f36399c9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.729029 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.729190 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.729432 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6389489a-7b63-44c5-aa24-8ff7f36399c9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.732065 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6389489a-7b63-44c5-aa24-8ff7f36399c9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.736978 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9rppj" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.750022 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzdtf\" (UniqueName: \"kubernetes.io/projected/6389489a-7b63-44c5-aa24-8ff7f36399c9-kube-api-access-jzdtf\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.775355 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.775394 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dc0087efccc13a5ad7870124f878357356eb0ef59dea8d42ec4c6c18d3ce37a0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.786731 4833 scope.go:117] "RemoveContainer" containerID="35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.855126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4427552d-b72b-4644-b9f3-d02ed7b49d08\") pod \"prometheus-metric-storage-0\" (UID: \"6389489a-7b63-44c5-aa24-8ff7f36399c9\") " pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.911725 4833 scope.go:117] "RemoveContainer" containerID="35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53" Oct 13 08:16:37 crc kubenswrapper[4833]: E1013 08:16:37.915191 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53\": container with ID starting with 35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53 not found: ID does not exist" containerID="35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53" Oct 13 08:16:37 crc kubenswrapper[4833]: I1013 08:16:37.915227 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53"} err="failed to get container status \"35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53\": rpc error: code = NotFound desc = could not find container \"35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53\": container with ID starting with 35aa3e508d0c2ca3bafe1c3b605d14cb5c5edac333b9ac10d9d4147726884d53 not found: ID does not exist" Oct 13 08:16:38 crc kubenswrapper[4833]: I1013 08:16:38.144938 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:38 crc kubenswrapper[4833]: I1013 08:16:38.321192 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-9rppj"] Oct 13 08:16:38 crc kubenswrapper[4833]: I1013 08:16:38.464823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9rppj" event={"ID":"2a1ff485-e9d3-42cb-a91b-893838097d21","Type":"ContainerStarted","Data":"0347a6709e1f3d5b866c7e1314a9af7a6775b9a12129be55beb95224272e80ed"} Oct 13 08:16:38 crc kubenswrapper[4833]: I1013 08:16:38.639830 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea8e240-e5ac-4a76-970e-cf5cb0a94762" path="/var/lib/kubelet/pods/2ea8e240-e5ac-4a76-970e-cf5cb0a94762/volumes" Oct 13 08:16:38 crc kubenswrapper[4833]: I1013 08:16:38.640602 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76fbe037-9460-47b3-afd9-a14bca5d61eb" path="/var/lib/kubelet/pods/76fbe037-9460-47b3-afd9-a14bca5d61eb/volumes" Oct 13 08:16:38 crc kubenswrapper[4833]: I1013 08:16:38.693062 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 13 08:16:38 crc kubenswrapper[4833]: W1013 08:16:38.721036 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6389489a_7b63_44c5_aa24_8ff7f36399c9.slice/crio-0724b6341c1d2fb3039e210026eaab6f3ba0ad88511c06c9701777eecfcde33c WatchSource:0}: Error finding container 0724b6341c1d2fb3039e210026eaab6f3ba0ad88511c06c9701777eecfcde33c: Status 404 returned error can't find the container with id 0724b6341c1d2fb3039e210026eaab6f3ba0ad88511c06c9701777eecfcde33c Oct 13 08:16:39 crc kubenswrapper[4833]: I1013 08:16:39.479173 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6389489a-7b63-44c5-aa24-8ff7f36399c9","Type":"ContainerStarted","Data":"0724b6341c1d2fb3039e210026eaab6f3ba0ad88511c06c9701777eecfcde33c"} Oct 13 08:16:39 crc kubenswrapper[4833]: I1013 08:16:39.481667 4833 generic.go:334] "Generic (PLEG): container finished" podID="2a1ff485-e9d3-42cb-a91b-893838097d21" containerID="2e2b0e2c9d8279ddc62d2714ab3ae03887332b78204506b9bc87477a0bcfee47" exitCode=0 Oct 13 08:16:39 crc kubenswrapper[4833]: I1013 08:16:39.481716 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9rppj" event={"ID":"2a1ff485-e9d3-42cb-a91b-893838097d21","Type":"ContainerDied","Data":"2e2b0e2c9d8279ddc62d2714ab3ae03887332b78204506b9bc87477a0bcfee47"} Oct 13 08:16:41 crc kubenswrapper[4833]: I1013 08:16:41.117714 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9rppj" Oct 13 08:16:41 crc kubenswrapper[4833]: I1013 08:16:41.191490 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbctg\" (UniqueName: \"kubernetes.io/projected/2a1ff485-e9d3-42cb-a91b-893838097d21-kube-api-access-mbctg\") pod \"2a1ff485-e9d3-42cb-a91b-893838097d21\" (UID: \"2a1ff485-e9d3-42cb-a91b-893838097d21\") " Oct 13 08:16:41 crc kubenswrapper[4833]: I1013 08:16:41.208927 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1ff485-e9d3-42cb-a91b-893838097d21-kube-api-access-mbctg" (OuterVolumeSpecName: "kube-api-access-mbctg") pod "2a1ff485-e9d3-42cb-a91b-893838097d21" (UID: "2a1ff485-e9d3-42cb-a91b-893838097d21"). 
InnerVolumeSpecName "kube-api-access-mbctg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:16:41 crc kubenswrapper[4833]: I1013 08:16:41.294589 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbctg\" (UniqueName: \"kubernetes.io/projected/2a1ff485-e9d3-42cb-a91b-893838097d21-kube-api-access-mbctg\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:41 crc kubenswrapper[4833]: I1013 08:16:41.506277 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9rppj" event={"ID":"2a1ff485-e9d3-42cb-a91b-893838097d21","Type":"ContainerDied","Data":"0347a6709e1f3d5b866c7e1314a9af7a6775b9a12129be55beb95224272e80ed"} Oct 13 08:16:41 crc kubenswrapper[4833]: I1013 08:16:41.506318 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0347a6709e1f3d5b866c7e1314a9af7a6775b9a12129be55beb95224272e80ed" Oct 13 08:16:41 crc kubenswrapper[4833]: I1013 08:16:41.506329 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9rppj" Oct 13 08:16:43 crc kubenswrapper[4833]: I1013 08:16:43.537746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6389489a-7b63-44c5-aa24-8ff7f36399c9","Type":"ContainerStarted","Data":"e606a24a0f55c9bb40daefdacba169162cdbf8a45ecedf9389f545fa3eee4350"} Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.307477 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-b02d-account-create-tddwc"] Oct 13 08:16:47 crc kubenswrapper[4833]: E1013 08:16:47.308414 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1ff485-e9d3-42cb-a91b-893838097d21" containerName="mariadb-database-create" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.308431 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1ff485-e9d3-42cb-a91b-893838097d21" containerName="mariadb-database-create" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.308729 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1ff485-e9d3-42cb-a91b-893838097d21" containerName="mariadb-database-create" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.309613 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-b02d-account-create-tddwc" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.316296 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.330993 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-b02d-account-create-tddwc"] Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.451415 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlm4\" (UniqueName: \"kubernetes.io/projected/57ae373d-930f-4db9-8ff9-d8c40a13c48d-kube-api-access-7rlm4\") pod \"aodh-b02d-account-create-tddwc\" (UID: \"57ae373d-930f-4db9-8ff9-d8c40a13c48d\") " pod="openstack/aodh-b02d-account-create-tddwc" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.554147 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlm4\" (UniqueName: \"kubernetes.io/projected/57ae373d-930f-4db9-8ff9-d8c40a13c48d-kube-api-access-7rlm4\") pod \"aodh-b02d-account-create-tddwc\" (UID: \"57ae373d-930f-4db9-8ff9-d8c40a13c48d\") " pod="openstack/aodh-b02d-account-create-tddwc" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.617788 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlm4\" (UniqueName: \"kubernetes.io/projected/57ae373d-930f-4db9-8ff9-d8c40a13c48d-kube-api-access-7rlm4\") pod \"aodh-b02d-account-create-tddwc\" (UID: \"57ae373d-930f-4db9-8ff9-d8c40a13c48d\") " pod="openstack/aodh-b02d-account-create-tddwc" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.628742 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:16:47 crc kubenswrapper[4833]: E1013 08:16:47.629226 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:16:47 crc kubenswrapper[4833]: I1013 08:16:47.642895 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-b02d-account-create-tddwc" Oct 13 08:16:48 crc kubenswrapper[4833]: I1013 08:16:48.141036 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-b02d-account-create-tddwc"] Oct 13 08:16:48 crc kubenswrapper[4833]: I1013 08:16:48.615802 4833 generic.go:334] "Generic (PLEG): container finished" podID="57ae373d-930f-4db9-8ff9-d8c40a13c48d" containerID="c43384bc85adb03d237d99ce71ca2eee5410ab9da244547a01067d7825605946" exitCode=0 Oct 13 08:16:48 crc kubenswrapper[4833]: I1013 08:16:48.615843 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b02d-account-create-tddwc" event={"ID":"57ae373d-930f-4db9-8ff9-d8c40a13c48d","Type":"ContainerDied","Data":"c43384bc85adb03d237d99ce71ca2eee5410ab9da244547a01067d7825605946"} Oct 13 08:16:48 crc kubenswrapper[4833]: I1013 08:16:48.615869 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b02d-account-create-tddwc" event={"ID":"57ae373d-930f-4db9-8ff9-d8c40a13c48d","Type":"ContainerStarted","Data":"5c30c7f2c2894af1a84acdc6a655649aef2d5a7103bbed21f854faaac55d3d68"} Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.100664 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-b02d-account-create-tddwc" Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.154789 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlm4\" (UniqueName: \"kubernetes.io/projected/57ae373d-930f-4db9-8ff9-d8c40a13c48d-kube-api-access-7rlm4\") pod \"57ae373d-930f-4db9-8ff9-d8c40a13c48d\" (UID: \"57ae373d-930f-4db9-8ff9-d8c40a13c48d\") " Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.165773 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ae373d-930f-4db9-8ff9-d8c40a13c48d-kube-api-access-7rlm4" (OuterVolumeSpecName: "kube-api-access-7rlm4") pod "57ae373d-930f-4db9-8ff9-d8c40a13c48d" (UID: "57ae373d-930f-4db9-8ff9-d8c40a13c48d"). InnerVolumeSpecName "kube-api-access-7rlm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.258293 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlm4\" (UniqueName: \"kubernetes.io/projected/57ae373d-930f-4db9-8ff9-d8c40a13c48d-kube-api-access-7rlm4\") on node \"crc\" DevicePath \"\"" Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.633808 4833 generic.go:334] "Generic (PLEG): container finished" podID="6389489a-7b63-44c5-aa24-8ff7f36399c9" containerID="e606a24a0f55c9bb40daefdacba169162cdbf8a45ecedf9389f545fa3eee4350" exitCode=0 Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.636064 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-b02d-account-create-tddwc" Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.639495 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6389489a-7b63-44c5-aa24-8ff7f36399c9","Type":"ContainerDied","Data":"e606a24a0f55c9bb40daefdacba169162cdbf8a45ecedf9389f545fa3eee4350"} Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.639531 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b02d-account-create-tddwc" event={"ID":"57ae373d-930f-4db9-8ff9-d8c40a13c48d","Type":"ContainerDied","Data":"5c30c7f2c2894af1a84acdc6a655649aef2d5a7103bbed21f854faaac55d3d68"} Oct 13 08:16:50 crc kubenswrapper[4833]: I1013 08:16:50.639564 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c30c7f2c2894af1a84acdc6a655649aef2d5a7103bbed21f854faaac55d3d68" Oct 13 08:16:51 crc kubenswrapper[4833]: I1013 08:16:51.645126 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6389489a-7b63-44c5-aa24-8ff7f36399c9","Type":"ContainerStarted","Data":"44c813451446a5a8502dfa3b2c84a58597a08d623ceeb97741ec58589fae4637"} Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.801273 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-9wqcp"] Oct 13 08:16:52 crc kubenswrapper[4833]: E1013 08:16:52.802141 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ae373d-930f-4db9-8ff9-d8c40a13c48d" containerName="mariadb-account-create" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.802160 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ae373d-930f-4db9-8ff9-d8c40a13c48d" containerName="mariadb-account-create" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.802418 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ae373d-930f-4db9-8ff9-d8c40a13c48d" containerName="mariadb-account-create" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.803306 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.805123 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jbng6" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.805625 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.806095 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.822139 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-9wqcp"] Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.917161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svxcj\" (UniqueName: \"kubernetes.io/projected/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-kube-api-access-svxcj\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.917304 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-config-data\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.917687 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-scripts\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:52 crc kubenswrapper[4833]: I1013 08:16:52.917728 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-combined-ca-bundle\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.019131 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-scripts\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.019171 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-combined-ca-bundle\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.019249 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svxcj\" (UniqueName: \"kubernetes.io/projected/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-kube-api-access-svxcj\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.019333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-config-data\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.024998 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-scripts\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.025805 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-config-data\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.026159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-combined-ca-bundle\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.043097 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svxcj\" (UniqueName: \"kubernetes.io/projected/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-kube-api-access-svxcj\") pod \"aodh-db-sync-9wqcp\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.171938 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:16:53 crc kubenswrapper[4833]: I1013 08:16:53.824366 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-9wqcp"] Oct 13 08:16:54 crc kubenswrapper[4833]: I1013 08:16:54.405282 4833 scope.go:117] "RemoveContainer" containerID="bcb94f369faa5e18cbba0fb7155cbb65a93627e835a22c44cb646b5830dad5b4" Oct 13 08:16:54 crc kubenswrapper[4833]: I1013 08:16:54.449928 4833 scope.go:117] "RemoveContainer" containerID="c059e3353b2e9fcd2728f1b27be6446414ea15278896d72775b4b206ce865b3a" Oct 13 08:16:54 crc kubenswrapper[4833]: I1013 08:16:54.505758 4833 scope.go:117] "RemoveContainer" containerID="039b522d1b46fc4a33e80b902619cb87f41f131570c9fbf285583446628721ed" Oct 13 08:16:54 crc kubenswrapper[4833]: I1013 08:16:54.680982 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-9wqcp" event={"ID":"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f","Type":"ContainerStarted","Data":"04a1b8caa9ade7c68e626c49d3543c6fed0afb6324dc8d8053596b88fd17b592"} Oct 13 08:16:55 crc kubenswrapper[4833]: I1013 08:16:55.696573 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6389489a-7b63-44c5-aa24-8ff7f36399c9","Type":"ContainerStarted","Data":"49eba926886abdfccefaee44481d96b59fb908e5ef6b031a9e53002aaa335e0e"} Oct 13 08:16:55 crc kubenswrapper[4833]: I1013 08:16:55.696628 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6389489a-7b63-44c5-aa24-8ff7f36399c9","Type":"ContainerStarted","Data":"d84d80d57c3958fd08ec7c6f785c24464de46067bac37e2a0ebcb489dd1b4663"} Oct 13 08:16:55 crc kubenswrapper[4833]: I1013 08:16:55.728178 4833 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.728160343 podStartE2EDuration="18.728160343s" podCreationTimestamp="2025-10-13 08:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:16:55.727048831 +0000 UTC m=+6505.827471757" watchObservedRunningTime="2025-10-13 08:16:55.728160343 +0000 UTC m=+6505.828583269" Oct 13 08:16:58 crc kubenswrapper[4833]: I1013 08:16:58.059491 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 13 08:16:58 crc kubenswrapper[4833]: I1013 08:16:58.146404 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Oct 13 08:16:58 crc kubenswrapper[4833]: I1013 08:16:58.627121 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:16:58 crc kubenswrapper[4833]: E1013 08:16:58.627619 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:16:58 crc kubenswrapper[4833]: I1013 08:16:58.740296 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-9wqcp" event={"ID":"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f","Type":"ContainerStarted","Data":"0f0020cdf9c08c030f23412131070f6cf4484c1d2ee5de8b3eb616fd96135e37"} Oct 13 08:16:58 crc kubenswrapper[4833]: I1013 08:16:58.770079 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-9wqcp" podStartSLOduration=2.216933431 podStartE2EDuration="6.77006088s" podCreationTimestamp="2025-10-13 08:16:52 +0000 UTC" firstStartedPulling="2025-10-13 08:16:53.834769557 +0000 UTC m=+6503.935192473" lastFinishedPulling="2025-10-13 08:16:58.387897006 +0000 UTC m=+6508.488319922" observedRunningTime="2025-10-13 08:16:58.768310001 +0000 UTC m=+6508.868732937" watchObservedRunningTime="2025-10-13 08:16:58.77006088 +0000 UTC m=+6508.870483786" Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.044284 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dmzfp"] Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.057967 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-w2h2m"] Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.072066 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dmzfp"] Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.082225 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-w2h2m"] Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.090519 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vngxq"] Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.099382 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vngxq"] Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.640914 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1573d87d-2039-4e07-8b10-40e82b030687" 
path="/var/lib/kubelet/pods/1573d87d-2039-4e07-8b10-40e82b030687/volumes" Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.642315 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c" path="/var/lib/kubelet/pods/3fd16fb1-4ba4-40b5-8299-4ecb0a951d7c/volumes" Oct 13 08:17:00 crc kubenswrapper[4833]: I1013 08:17:00.643097 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e82baa4c-bdbb-4c92-b1eb-2a47303b5e70" path="/var/lib/kubelet/pods/e82baa4c-bdbb-4c92-b1eb-2a47303b5e70/volumes" Oct 13 08:17:01 crc kubenswrapper[4833]: I1013 08:17:01.778960 4833 generic.go:334] "Generic (PLEG): container finished" podID="c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" containerID="0f0020cdf9c08c030f23412131070f6cf4484c1d2ee5de8b3eb616fd96135e37" exitCode=0 Oct 13 08:17:01 crc kubenswrapper[4833]: I1013 08:17:01.779034 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-9wqcp" event={"ID":"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f","Type":"ContainerDied","Data":"0f0020cdf9c08c030f23412131070f6cf4484c1d2ee5de8b3eb616fd96135e37"} Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.041851 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.042084 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3ca0b086-13c9-4a2d-b67a-3df07689ea9f" containerName="kube-state-metrics" containerID="cri-o://ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2" gracePeriod=30 Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.749915 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.807108 4833 generic.go:334] "Generic (PLEG): container finished" podID="3ca0b086-13c9-4a2d-b67a-3df07689ea9f" containerID="ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2" exitCode=2 Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.807508 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.808197 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ca0b086-13c9-4a2d-b67a-3df07689ea9f","Type":"ContainerDied","Data":"ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2"} Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.808221 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ca0b086-13c9-4a2d-b67a-3df07689ea9f","Type":"ContainerDied","Data":"751cf68062c6a38f7d3ceb079cd8e4abdf3f6cba4d0c6a5035a8585d4afb1dc2"} Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.808238 4833 scope.go:117] "RemoveContainer" containerID="ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2" Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.836823 4833 scope.go:117] "RemoveContainer" containerID="ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2" Oct 13 08:17:02 crc kubenswrapper[4833]: E1013 08:17:02.837222 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2\": container with ID starting with ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2 not found: ID does not exist" containerID="ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2" Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.837256 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2"} err="failed to get container status \"ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2\": rpc error: code = NotFound desc = could not find container \"ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2\": container with ID starting with ce2ff48b288d569c2daea2c4de6a4a934bc1c06956ddc3b7af4d242e4575e7c2 not found: ID does not exist" Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.895678 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7lb9\" (UniqueName: \"kubernetes.io/projected/3ca0b086-13c9-4a2d-b67a-3df07689ea9f-kube-api-access-f7lb9\") pod \"3ca0b086-13c9-4a2d-b67a-3df07689ea9f\" (UID: \"3ca0b086-13c9-4a2d-b67a-3df07689ea9f\") " Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.912752 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca0b086-13c9-4a2d-b67a-3df07689ea9f-kube-api-access-f7lb9" (OuterVolumeSpecName: "kube-api-access-f7lb9") pod "3ca0b086-13c9-4a2d-b67a-3df07689ea9f" (UID: "3ca0b086-13c9-4a2d-b67a-3df07689ea9f"). InnerVolumeSpecName "kube-api-access-f7lb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:17:02 crc kubenswrapper[4833]: I1013 08:17:02.998162 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7lb9\" (UniqueName: \"kubernetes.io/projected/3ca0b086-13c9-4a2d-b67a-3df07689ea9f-kube-api-access-f7lb9\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.139846 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.157718 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.172240 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 08:17:03 crc kubenswrapper[4833]: E1013 08:17:03.172929 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca0b086-13c9-4a2d-b67a-3df07689ea9f" containerName="kube-state-metrics" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.172954 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca0b086-13c9-4a2d-b67a-3df07689ea9f" containerName="kube-state-metrics" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.173193 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca0b086-13c9-4a2d-b67a-3df07689ea9f" containerName="kube-state-metrics" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.174965 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.177430 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.177439 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.194330 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.202444 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.202565 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snp8d\" (UniqueName: \"kubernetes.io/projected/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-api-access-snp8d\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.202603 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.202651 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.304696 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snp8d\" (UniqueName: \"kubernetes.io/projected/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-api-access-snp8d\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.304781 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.304855 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.304986 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.311022 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.312806 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.314142 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.333993 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snp8d\" (UniqueName: \"kubernetes.io/projected/03e397ac-e05a-4f00-9ce6-91a68ac1d22f-kube-api-access-snp8d\") pod \"kube-state-metrics-0\" (UID: \"03e397ac-e05a-4f00-9ce6-91a68ac1d22f\") " pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.400228 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.496035 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.508739 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-scripts\") pod \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.508794 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-combined-ca-bundle\") pod \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.508910 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-config-data\") pod \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.508962 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svxcj\" (UniqueName: \"kubernetes.io/projected/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-kube-api-access-svxcj\") pod \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\" (UID: \"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f\") " Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.513293 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-scripts" (OuterVolumeSpecName: "scripts") pod "c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" (UID: "c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.536342 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-kube-api-access-svxcj" (OuterVolumeSpecName: "kube-api-access-svxcj") pod "c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" (UID: "c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f"). InnerVolumeSpecName "kube-api-access-svxcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.538147 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" (UID: "c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.547327 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-config-data" (OuterVolumeSpecName: "config-data") pod "c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" (UID: "c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.616433 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.616643 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.616730 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.616800 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svxcj\" (UniqueName: \"kubernetes.io/projected/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f-kube-api-access-svxcj\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.850368 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-9wqcp" event={"ID":"c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f","Type":"ContainerDied","Data":"04a1b8caa9ade7c68e626c49d3543c6fed0afb6324dc8d8053596b88fd17b592"} Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.850600 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a1b8caa9ade7c68e626c49d3543c6fed0afb6324dc8d8053596b88fd17b592" Oct 13 08:17:03 crc kubenswrapper[4833]: I1013 08:17:03.850389 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-9wqcp" Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.045572 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 08:17:04 crc kubenswrapper[4833]: W1013 08:17:04.051984 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03e397ac_e05a_4f00_9ce6_91a68ac1d22f.slice/crio-a3ea292fd58b0e17bdbf8732825413fd15b6c201dde2ea2ff4a4516d063207e8 WatchSource:0}: Error finding container a3ea292fd58b0e17bdbf8732825413fd15b6c201dde2ea2ff4a4516d063207e8: Status 404 returned error can't find the container with id a3ea292fd58b0e17bdbf8732825413fd15b6c201dde2ea2ff4a4516d063207e8 Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.328119 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.328647 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="ceilometer-central-agent" containerID="cri-o://3343b3e3d0e54dcb8421271c520b8bd5f7de7c82c3209c61f692a53a6d3f2d87" gracePeriod=30 Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.329058 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="ceilometer-notification-agent" containerID="cri-o://cf4e0ccd900912766f52bf0e3fa373bf8e64e367a52a7fd970b82ca8c78ed6fe" gracePeriod=30 Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.329085 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="sg-core" containerID="cri-o://f1482bab91418c52ebdc23e94af7096ff2aeb9f8425db5212a5f6b07344fffe3" gracePeriod=30 Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.329596 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="proxy-httpd" containerID="cri-o://e22308bd039d7054fa9dfe17d545a67124c75c9773e3c1be3cab9b5c3027e537" gracePeriod=30 Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.640729 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca0b086-13c9-4a2d-b67a-3df07689ea9f" path="/var/lib/kubelet/pods/3ca0b086-13c9-4a2d-b67a-3df07689ea9f/volumes" Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.865159 4833 generic.go:334] "Generic (PLEG): container finished" podID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerID="e22308bd039d7054fa9dfe17d545a67124c75c9773e3c1be3cab9b5c3027e537" exitCode=0 Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.865574 4833 generic.go:334] "Generic (PLEG): container finished" podID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerID="f1482bab91418c52ebdc23e94af7096ff2aeb9f8425db5212a5f6b07344fffe3" exitCode=2 Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.865236 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerDied","Data":"e22308bd039d7054fa9dfe17d545a67124c75c9773e3c1be3cab9b5c3027e537"} Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.865626 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerDied","Data":"f1482bab91418c52ebdc23e94af7096ff2aeb9f8425db5212a5f6b07344fffe3"} Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.865645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerDied","Data":"3343b3e3d0e54dcb8421271c520b8bd5f7de7c82c3209c61f692a53a6d3f2d87"} Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.865590 4833 generic.go:334] "Generic (PLEG): container finished" podID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerID="3343b3e3d0e54dcb8421271c520b8bd5f7de7c82c3209c61f692a53a6d3f2d87" exitCode=0 Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.867803 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"03e397ac-e05a-4f00-9ce6-91a68ac1d22f","Type":"ContainerStarted","Data":"57beabbfc07f4d3ff2e2231af4497158a7fadc97e49e5b51420d1748538552bd"} Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.867862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"03e397ac-e05a-4f00-9ce6-91a68ac1d22f","Type":"ContainerStarted","Data":"a3ea292fd58b0e17bdbf8732825413fd15b6c201dde2ea2ff4a4516d063207e8"} Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.867910 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 13 08:17:04 crc kubenswrapper[4833]: I1013 08:17:04.892837 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.444636633 podStartE2EDuration="1.892818934s" podCreationTimestamp="2025-10-13 08:17:03 +0000 UTC" firstStartedPulling="2025-10-13 08:17:04.05656427 +0000 
UTC m=+6514.156987186" lastFinishedPulling="2025-10-13 08:17:04.504746561 +0000 UTC m=+6514.605169487" observedRunningTime="2025-10-13 08:17:04.888567303 +0000 UTC m=+6514.988990219" watchObservedRunningTime="2025-10-13 08:17:04.892818934 +0000 UTC m=+6514.993241850" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.276391 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:07 crc kubenswrapper[4833]: E1013 08:17:07.277318 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" containerName="aodh-db-sync" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.277337 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" containerName="aodh-db-sync" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.277767 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" containerName="aodh-db-sync" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.280990 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.283577 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jbng6" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.283663 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.283721 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.292906 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.328847 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.328989 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-scripts\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.329153 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmts7\" (UniqueName: \"kubernetes.io/projected/6c4f03e5-c261-465d-9fe2-d8abe2249840-kube-api-access-mmts7\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.329185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-config-data\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.431713 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.431817 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-scripts\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.431950 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmts7\" (UniqueName: \"kubernetes.io/projected/6c4f03e5-c261-465d-9fe2-d8abe2249840-kube-api-access-mmts7\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.431986 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-config-data\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.440292 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-scripts\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.440896 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-config-data\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.450510 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.461371 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmts7\" (UniqueName: \"kubernetes.io/projected/6c4f03e5-c261-465d-9fe2-d8abe2249840-kube-api-access-mmts7\") pod \"aodh-0\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.604207 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.926815 4833 generic.go:334] "Generic (PLEG): container finished" podID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerID="cf4e0ccd900912766f52bf0e3fa373bf8e64e367a52a7fd970b82ca8c78ed6fe" exitCode=0 Oct 13 08:17:07 crc kubenswrapper[4833]: I1013 08:17:07.927065 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerDied","Data":"cf4e0ccd900912766f52bf0e3fa373bf8e64e367a52a7fd970b82ca8c78ed6fe"} Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.147384 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.159193 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.194783 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.246448 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.348933 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-config-data\") pod \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.350436 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-combined-ca-bundle\") pod \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.350488 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-run-httpd\") pod \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.350566 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-scripts\") pod \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.350806 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-log-httpd\") pod \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.350839 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn2jh\" (UniqueName: \"kubernetes.io/projected/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-kube-api-access-wn2jh\") pod \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.350866 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-sg-core-conf-yaml\") pod \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\" (UID: \"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a\") " Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.351109 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" (UID: "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.351618 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" (UID: "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.352079 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.352161 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.355429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-scripts" (OuterVolumeSpecName: "scripts") pod "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" (UID: "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.355576 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-kube-api-access-wn2jh" (OuterVolumeSpecName: "kube-api-access-wn2jh") pod "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" (UID: "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a"). InnerVolumeSpecName "kube-api-access-wn2jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.384365 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" (UID: "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.438201 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" (UID: "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.453918 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn2jh\" (UniqueName: \"kubernetes.io/projected/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-kube-api-access-wn2jh\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.453978 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.453991 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.454008 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.464440 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-config-data" (OuterVolumeSpecName: "config-data") pod "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" (UID: "f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.556041 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.936589 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerStarted","Data":"598828e6ba51669bef6bb3173963d8a73382537dfdb6dae238ed15c5da473349"} Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.939109 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a","Type":"ContainerDied","Data":"845f485be06cd57f2fd68e325e0caf5b10c3391a4770ea210e66e1111e527b10"} Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.939155 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.939161 4833 scope.go:117] "RemoveContainer" containerID="e22308bd039d7054fa9dfe17d545a67124c75c9773e3c1be3cab9b5c3027e537" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.944610 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 13 08:17:08 crc kubenswrapper[4833]: I1013 08:17:08.957421 4833 scope.go:117] "RemoveContainer" containerID="f1482bab91418c52ebdc23e94af7096ff2aeb9f8425db5212a5f6b07344fffe3" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.004710 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.005949 4833 scope.go:117] "RemoveContainer" containerID="cf4e0ccd900912766f52bf0e3fa373bf8e64e367a52a7fd970b82ca8c78ed6fe" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.014653 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.026143 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:09 crc kubenswrapper[4833]: E1013 08:17:09.026597 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="sg-core" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.026609 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="sg-core" Oct 13 08:17:09 crc kubenswrapper[4833]: E1013 08:17:09.026631 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="ceilometer-central-agent" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.026636 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="ceilometer-central-agent" Oct 13 08:17:09 crc kubenswrapper[4833]: E1013 08:17:09.027982 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="proxy-httpd" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.028013 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="proxy-httpd" Oct 13 08:17:09 crc kubenswrapper[4833]: E1013 08:17:09.028068 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="ceilometer-notification-agent" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.028076 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="ceilometer-notification-agent" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.028442 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="proxy-httpd" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.028470 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="ceilometer-notification-agent" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.028481 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="sg-core" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.028505 4833 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" containerName="ceilometer-central-agent" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.031351 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.034520 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.034987 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.035358 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.035420 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.072947 4833 scope.go:117] "RemoveContainer" containerID="3343b3e3d0e54dcb8421271c520b8bd5f7de7c82c3209c61f692a53a6d3f2d87" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.074351 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-run-httpd\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.074399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.074419 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-log-httpd\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.074673 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.074709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-scripts\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.075307 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-config-data\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.075987 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6fw\" (UniqueName: 
\"kubernetes.io/projected/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-kube-api-access-5w6fw\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.076214 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.178760 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.178799 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-scripts\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.178846 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-config-data\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.178919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6fw\" (UniqueName: \"kubernetes.io/projected/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-kube-api-access-5w6fw\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.178963 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.179008 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-run-httpd\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.179027 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-log-httpd\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.179042 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.180039 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-run-httpd\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.180510 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-log-httpd\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.184673 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-scripts\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.185065 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.199137 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6fw\" (UniqueName: \"kubernetes.io/projected/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-kube-api-access-5w6fw\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.203190 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.209044 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.217406 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-config-data\") pod \"ceilometer-0\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " pod="openstack/ceilometer-0" Oct 13 08:17:09 crc kubenswrapper[4833]: I1013 08:17:09.349882 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.031990 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-96ac-account-create-4wt7n"] Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.041671 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1d3f-account-create-7kwk2"] Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.051153 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-96ac-account-create-4wt7n"] Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.060010 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1d3f-account-create-7kwk2"] Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.185853 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:10 crc kubenswrapper[4833]: W1013 08:17:10.187164 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd36f1a3c_6d5e_4d42_9cea_2fa77812e1fb.slice/crio-e606f58691bd959a476cb5908ae28a2582032daaed5ec8939460d550ec352eab WatchSource:0}: Error finding container e606f58691bd959a476cb5908ae28a2582032daaed5ec8939460d550ec352eab: Status 404 returned error can't find the container with id e606f58691bd959a476cb5908ae28a2582032daaed5ec8939460d550ec352eab Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.458858 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.629350 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:17:10 crc kubenswrapper[4833]: E1013 08:17:10.629655 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.639955 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f7fa85-3f1e-4c32-bea6-dff73995d9bb" path="/var/lib/kubelet/pods/03f7fa85-3f1e-4c32-bea6-dff73995d9bb/volumes" Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.640567 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a" path="/var/lib/kubelet/pods/f482c8c2-1cb1-4aaf-96e9-c87f53ad7a0a/volumes" Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.641234 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f912654a-1ed1-419f-a02a-42dc38b92b75" path="/var/lib/kubelet/pods/f912654a-1ed1-419f-a02a-42dc38b92b75/volumes" Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.867768 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.965848 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerStarted","Data":"f3d2432ef23c146f568df2dd9046244d222bde8c3123642e99ee9b7a8d4bcca4"} Oct 13 08:17:10 crc kubenswrapper[4833]: I1013 08:17:10.967371 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerStarted","Data":"e606f58691bd959a476cb5908ae28a2582032daaed5ec8939460d550ec352eab"} Oct 13 08:17:11 crc kubenswrapper[4833]: I1013 08:17:11.037326 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-777d-account-create-vqgb4"] Oct 13 08:17:11 crc kubenswrapper[4833]: I1013 08:17:11.051392 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-777d-account-create-vqgb4"] Oct 13 08:17:11 crc kubenswrapper[4833]: I1013 08:17:11.978975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerStarted","Data":"b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414"} Oct 13 08:17:12 crc kubenswrapper[4833]: I1013 08:17:12.647737 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6" path="/var/lib/kubelet/pods/3e3c4faf-8bcb-470d-ba72-7d24bdd8ddf6/volumes" Oct 13 08:17:13 crc kubenswrapper[4833]: I1013 08:17:13.517804 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 13 08:17:16 crc kubenswrapper[4833]: I1013 08:17:16.013822 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerStarted","Data":"98df48e34c7cb5030f437d95a3bbea43cd8d6638638724499b33b63abb6f2c4e"} Oct 13 08:17:16 crc kubenswrapper[4833]: I1013 08:17:16.015131 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerStarted","Data":"aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4"} Oct 13 08:17:17 crc kubenswrapper[4833]: I1013 08:17:17.038747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerStarted","Data":"b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497"} Oct 13 08:17:18 crc kubenswrapper[4833]: I1013 08:17:18.056271 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerStarted","Data":"86c6f05f9c64345a21933bae4b736e954c80c1ed6573ea75536cdedf6dd38fb1"} Oct 13 08:17:18 crc kubenswrapper[4833]: I1013 08:17:18.059219 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerStarted","Data":"ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda"} Oct 13 08:17:18 crc kubenswrapper[4833]: I1013 08:17:18.059453 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="ceilometer-central-agent" containerID="cri-o://b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414" gracePeriod=30 Oct 13 08:17:18 crc kubenswrapper[4833]: I1013 08:17:18.059897 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 08:17:18 crc kubenswrapper[4833]: I1013 08:17:18.060133 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="proxy-httpd" 
containerID="cri-o://ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda" gracePeriod=30 Oct 13 08:17:18 crc kubenswrapper[4833]: I1013 08:17:18.060177 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="ceilometer-notification-agent" containerID="cri-o://aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4" gracePeriod=30 Oct 13 08:17:18 crc kubenswrapper[4833]: I1013 08:17:18.060261 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="sg-core" containerID="cri-o://b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497" gracePeriod=30 Oct 13 08:17:18 crc kubenswrapper[4833]: I1013 08:17:18.089507 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.901909125 podStartE2EDuration="10.089462469s" podCreationTimestamp="2025-10-13 08:17:08 +0000 UTC" firstStartedPulling="2025-10-13 08:17:10.190597843 +0000 UTC m=+6520.291020779" lastFinishedPulling="2025-10-13 08:17:17.378151207 +0000 UTC m=+6527.478574123" observedRunningTime="2025-10-13 08:17:18.087299837 +0000 UTC m=+6528.187722753" watchObservedRunningTime="2025-10-13 08:17:18.089462469 +0000 UTC m=+6528.189885405" Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.070997 4833 generic.go:334] "Generic (PLEG): container finished" podID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerID="ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda" exitCode=0 Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.071593 4833 generic.go:334] "Generic (PLEG): container finished" podID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerID="b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497" exitCode=2 Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.071608 4833 generic.go:334] "Generic (PLEG): container finished" podID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerID="aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4" exitCode=0 Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.071099 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerDied","Data":"ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda"} Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.071719 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerDied","Data":"b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497"} Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.071770 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerDied","Data":"aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4"} Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.074130 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerStarted","Data":"580d42cf4bc6e420ca15980ef5b35db9216b2480a0706f9d3103cb85e75dcdb8"} Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.074264 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-api" containerID="cri-o://f3d2432ef23c146f568df2dd9046244d222bde8c3123642e99ee9b7a8d4bcca4" gracePeriod=30 Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.074311 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-listener" containerID="cri-o://580d42cf4bc6e420ca15980ef5b35db9216b2480a0706f9d3103cb85e75dcdb8" gracePeriod=30 Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.074330 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-notifier" containerID="cri-o://86c6f05f9c64345a21933bae4b736e954c80c1ed6573ea75536cdedf6dd38fb1" gracePeriod=30 Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.074359 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-evaluator" containerID="cri-o://98df48e34c7cb5030f437d95a3bbea43cd8d6638638724499b33b63abb6f2c4e" gracePeriod=30 Oct 13 08:17:19 crc kubenswrapper[4833]: I1013 08:17:19.102435 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.914852056 podStartE2EDuration="12.102406346s" podCreationTimestamp="2025-10-13 08:17:07 +0000 UTC" firstStartedPulling="2025-10-13 08:17:08.198337796 +0000 UTC m=+6518.298760712" lastFinishedPulling="2025-10-13 08:17:18.385892086 +0000 UTC m=+6528.486315002" observedRunningTime="2025-10-13 08:17:19.092768872 +0000 UTC m=+6529.193191798" watchObservedRunningTime="2025-10-13 08:17:19.102406346 +0000 UTC m=+6529.202829302" Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.058681 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d5qlw"] Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.073896 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d5qlw"] Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.088482 4833 generic.go:334] "Generic (PLEG): container finished" podID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerID="86c6f05f9c64345a21933bae4b736e954c80c1ed6573ea75536cdedf6dd38fb1" exitCode=0 Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.088533 4833 generic.go:334] "Generic (PLEG): container finished" podID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerID="98df48e34c7cb5030f437d95a3bbea43cd8d6638638724499b33b63abb6f2c4e" exitCode=0 Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.088574 4833 generic.go:334] "Generic (PLEG): container finished" podID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerID="f3d2432ef23c146f568df2dd9046244d222bde8c3123642e99ee9b7a8d4bcca4" exitCode=0 Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.088604 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerDied","Data":"86c6f05f9c64345a21933bae4b736e954c80c1ed6573ea75536cdedf6dd38fb1"} Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.088641 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerDied","Data":"98df48e34c7cb5030f437d95a3bbea43cd8d6638638724499b33b63abb6f2c4e"} Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.088659 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerDied","Data":"f3d2432ef23c146f568df2dd9046244d222bde8c3123642e99ee9b7a8d4bcca4"} Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.643039 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc34cc55-e653-4a59-ae09-762011632de0" path="/var/lib/kubelet/pods/bc34cc55-e653-4a59-ae09-762011632de0/volumes" Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.744661 4833 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod57ae373d-930f-4db9-8ff9-d8c40a13c48d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod57ae373d-930f-4db9-8ff9-d8c40a13c48d] : Timed out while waiting for systemd to remove kubepods-besteffort-pod57ae373d_930f_4db9_8ff9_d8c40a13c48d.slice" Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.796162 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.994134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-log-httpd\") pod \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.994744 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-ceilometer-tls-certs\") pod \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.995081 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-combined-ca-bundle\") pod \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.995357 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-scripts\") pod \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.995691 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w6fw\" (UniqueName: \"kubernetes.io/projected/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-kube-api-access-5w6fw\") pod \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.996063 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-run-httpd\") pod \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " Oct 13 08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.996436 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-config-data\") pod \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " Oct 13 
08:17:20 crc kubenswrapper[4833]: I1013 08:17:20.996804 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-sg-core-conf-yaml\") pod \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\" (UID: \"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb\") " Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:20.996576 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" (UID: "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:20.996990 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" (UID: "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.002503 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-kube-api-access-5w6fw" (OuterVolumeSpecName: "kube-api-access-5w6fw") pod "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" (UID: "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb"). InnerVolumeSpecName "kube-api-access-5w6fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.002825 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-scripts" (OuterVolumeSpecName: "scripts") pod "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" (UID: "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.030242 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" (UID: "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.086299 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" (UID: "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.098460 4833 generic.go:334] "Generic (PLEG): container finished" podID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerID="b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414" exitCode=0 Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.098814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerDied","Data":"b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414"} Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.098970 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb","Type":"ContainerDied","Data":"e606f58691bd959a476cb5908ae28a2582032daaed5ec8939460d550ec352eab"} Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.099067 4833 scope.go:117] "RemoveContainer" containerID="ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.098835 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.104180 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.104432 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.104519 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.104605 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.104683 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.104768 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w6fw\" (UniqueName: \"kubernetes.io/projected/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-kube-api-access-5w6fw\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.120098 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" (UID: "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.144152 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-config-data" (OuterVolumeSpecName: "config-data") pod "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" (UID: "d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.184680 4833 scope.go:117] "RemoveContainer" containerID="b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.205686 4833 scope.go:117] "RemoveContainer" containerID="aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.206341 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.206363 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.227714 4833 scope.go:117] "RemoveContainer" containerID="b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.252560 4833 scope.go:117] "RemoveContainer" containerID="ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda" Oct 13 08:17:21 crc kubenswrapper[4833]: E1013 08:17:21.252936 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda\": container with ID starting with ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda not found: ID does not exist" containerID="ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.252967 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda"} err="failed to get container status \"ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda\": rpc error: code = NotFound desc = could not find container \"ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda\": container with ID starting with ffbd0808491d29b975937694cde6793064e0392349d5a600ffd133d99471bcda not found: ID does not exist" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.252989 4833 scope.go:117] "RemoveContainer" containerID="b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497" Oct 13 08:17:21 crc kubenswrapper[4833]: E1013 08:17:21.253234 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497\": container with ID starting with b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497 not found: ID does not exist" containerID="b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.253261 4833 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497"} err="failed to get container status \"b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497\": rpc error: code = NotFound desc = could not find container \"b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497\": container with ID starting with b42ae2e60d1808dee13842832dc9a7024666cbb82b8f017d293317afeaff3497 not found: ID does not exist" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.253284 4833 scope.go:117] "RemoveContainer" containerID="aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4" Oct 13 08:17:21 crc kubenswrapper[4833]: E1013 08:17:21.254114 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4\": container with ID starting with aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4 not found: ID does not exist" containerID="aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.254159 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4"} err="failed to get container status \"aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4\": rpc error: code = NotFound desc = could not find container \"aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4\": container with ID starting with aa171c79f05944b2148fc07de69bfe9970dba71d160e7fd94ae0cc87a60e37d4 not found: ID does not exist" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.254188 4833 scope.go:117] "RemoveContainer" containerID="b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414" Oct 13 08:17:21 crc kubenswrapper[4833]: E1013 08:17:21.254513 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414\": container with ID starting with b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414 not found: ID does not exist" containerID="b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.254552 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414"} err="failed to get container status \"b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414\": rpc error: code = NotFound desc = could not find container \"b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414\": container with ID starting with b6b6d754a11170079cebe9a854122127b5295316549d80fb572629b8d683a414 not found: ID does not exist" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.453953 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.461495 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.488443 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:21 crc kubenswrapper[4833]: E1013 08:17:21.488917 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="proxy-httpd" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.488940 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="proxy-httpd" Oct 13 08:17:21 crc kubenswrapper[4833]: E1013 08:17:21.488977 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="sg-core" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.488986 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="sg-core" Oct 13 08:17:21 crc kubenswrapper[4833]: E1013 08:17:21.489002 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="ceilometer-central-agent" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.489010 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="ceilometer-central-agent" Oct 13 08:17:21 crc kubenswrapper[4833]: E1013 08:17:21.489035 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="ceilometer-notification-agent" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.489043 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="ceilometer-notification-agent" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.489261 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="ceilometer-notification-agent" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.489285 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="sg-core" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.489302 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="proxy-httpd" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.489320 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" containerName="ceilometer-central-agent" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.498719 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.513482 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-config-data\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.513575 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmctl\" (UniqueName: \"kubernetes.io/projected/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-kube-api-access-kmctl\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.513618 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.513680 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.513758 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-scripts\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.513797 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-log-httpd\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.513848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.513895 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-run-httpd\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.521678 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.533464 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.533925 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 13 08:17:21 crc 
kubenswrapper[4833]: I1013 08:17:21.534176 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.616484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-config-data\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.616570 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmctl\" (UniqueName: \"kubernetes.io/projected/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-kube-api-access-kmctl\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.616601 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.616653 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.616714 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-scripts\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.616737 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-log-httpd\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.616792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.616832 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-run-httpd\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.617303 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-run-httpd\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.618186 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-log-httpd\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.622214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.622455 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.622986 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-config-data\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.623524 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-scripts\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.625616 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.634264 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmctl\" (UniqueName: \"kubernetes.io/projected/746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d-kube-api-access-kmctl\") pod \"ceilometer-0\" (UID: \"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d\") " pod="openstack/ceilometer-0" Oct 13 08:17:21 crc kubenswrapper[4833]: I1013 08:17:21.861600 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 08:17:22 crc kubenswrapper[4833]: I1013 08:17:22.312662 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 08:17:22 crc kubenswrapper[4833]: I1013 08:17:22.651232 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb" path="/var/lib/kubelet/pods/d36f1a3c-6d5e-4d42-9cea-2fa77812e1fb/volumes" Oct 13 08:17:23 crc kubenswrapper[4833]: I1013 08:17:23.141788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d","Type":"ContainerStarted","Data":"db1da1c088588f76d468cf54a69e20d85ca44ebadfd579fbfacc01ccc5dfcd34"} Oct 13 08:17:23 crc kubenswrapper[4833]: I1013 08:17:23.142198 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d","Type":"ContainerStarted","Data":"a1a1199bcc77bae5128fcbe0a63838235cc9faf00ac281bd962ff17bcb80c880"} Oct 13 08:17:24 crc kubenswrapper[4833]: I1013 08:17:24.179123 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d","Type":"ContainerStarted","Data":"9340cee8ae2a1c3e423046ca3dde9765e130edea3c37545441308ec18d957f53"} Oct 13 08:17:24 crc kubenswrapper[4833]: I1013 08:17:24.627950 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:17:24 crc kubenswrapper[4833]: E1013 08:17:24.628755 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:17:25 crc kubenswrapper[4833]: I1013 08:17:25.190505 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d","Type":"ContainerStarted","Data":"d6b62c7a54a8966316d0e8cf38b0ec03b4babcc21675948c81900a8e596f7735"} Oct 13 08:17:26 crc kubenswrapper[4833]: I1013 08:17:26.202805 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d","Type":"ContainerStarted","Data":"ff015b7eafc868a064a01111b389eb80fca4d85147a92877f086ea8e3378f737"} Oct 13 08:17:26 crc kubenswrapper[4833]: I1013 08:17:26.203260 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 08:17:26 crc kubenswrapper[4833]: I1013 08:17:26.232562 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.141475682 podStartE2EDuration="5.232528217s" podCreationTimestamp="2025-10-13 08:17:21 +0000 UTC" firstStartedPulling="2025-10-13 08:17:22.325113263 +0000 UTC m=+6532.425536189" lastFinishedPulling="2025-10-13 08:17:25.416165778 +0000 UTC m=+6535.516588724" observedRunningTime="2025-10-13 08:17:26.228382299 +0000 UTC m=+6536.328805225" watchObservedRunningTime="2025-10-13 08:17:26.232528217 +0000 UTC m=+6536.332951133" Oct 13 08:17:39 crc kubenswrapper[4833]: I1013 08:17:39.054634 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-4gwzv"] Oct 13 08:17:39 crc kubenswrapper[4833]: I1013 08:17:39.072737 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-s55h8"] Oct 13 08:17:39 crc kubenswrapper[4833]: I1013 08:17:39.087816 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-s55h8"] Oct 13 08:17:39 crc kubenswrapper[4833]: I1013 08:17:39.099297 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4gwzv"] Oct 13 08:17:39 crc kubenswrapper[4833]: I1013 08:17:39.627701 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:17:39 crc kubenswrapper[4833]: E1013 08:17:39.628051 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:17:40 crc kubenswrapper[4833]: I1013 08:17:40.647737 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5505d43c-ff8b-4643-876b-843f46e64eb4" path="/var/lib/kubelet/pods/5505d43c-ff8b-4643-876b-843f46e64eb4/volumes" Oct 13 08:17:40 crc kubenswrapper[4833]: I1013 08:17:40.649780 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01ed8f4-afc5-4464-ab51-73cab30ef8d3" path="/var/lib/kubelet/pods/f01ed8f4-afc5-4464-ab51-73cab30ef8d3/volumes" Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.485522 4833 generic.go:334] "Generic (PLEG): container finished" podID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerID="580d42cf4bc6e420ca15980ef5b35db9216b2480a0706f9d3103cb85e75dcdb8" exitCode=137 Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.486098 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerDied","Data":"580d42cf4bc6e420ca15980ef5b35db9216b2480a0706f9d3103cb85e75dcdb8"} Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.620444 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.789735 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmts7\" (UniqueName: \"kubernetes.io/projected/6c4f03e5-c261-465d-9fe2-d8abe2249840-kube-api-access-mmts7\") pod \"6c4f03e5-c261-465d-9fe2-d8abe2249840\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.789875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-scripts\") pod \"6c4f03e5-c261-465d-9fe2-d8abe2249840\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.789934 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-config-data\") pod \"6c4f03e5-c261-465d-9fe2-d8abe2249840\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.789974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-combined-ca-bundle\") pod \"6c4f03e5-c261-465d-9fe2-d8abe2249840\" (UID: \"6c4f03e5-c261-465d-9fe2-d8abe2249840\") " Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.797874 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4f03e5-c261-465d-9fe2-d8abe2249840-kube-api-access-mmts7" (OuterVolumeSpecName: "kube-api-access-mmts7") pod "6c4f03e5-c261-465d-9fe2-d8abe2249840" (UID: "6c4f03e5-c261-465d-9fe2-d8abe2249840"). InnerVolumeSpecName "kube-api-access-mmts7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.799058 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-scripts" (OuterVolumeSpecName: "scripts") pod "6c4f03e5-c261-465d-9fe2-d8abe2249840" (UID: "6c4f03e5-c261-465d-9fe2-d8abe2249840"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.892980 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmts7\" (UniqueName: \"kubernetes.io/projected/6c4f03e5-c261-465d-9fe2-d8abe2249840-kube-api-access-mmts7\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.893024 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.970804 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c4f03e5-c261-465d-9fe2-d8abe2249840" (UID: "6c4f03e5-c261-465d-9fe2-d8abe2249840"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:49 crc kubenswrapper[4833]: I1013 08:17:49.995369 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.012485 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-config-data" (OuterVolumeSpecName: "config-data") pod "6c4f03e5-c261-465d-9fe2-d8abe2249840" (UID: "6c4f03e5-c261-465d-9fe2-d8abe2249840"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.097139 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4f03e5-c261-465d-9fe2-d8abe2249840-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.503880 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6c4f03e5-c261-465d-9fe2-d8abe2249840","Type":"ContainerDied","Data":"598828e6ba51669bef6bb3173963d8a73382537dfdb6dae238ed15c5da473349"} Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.504077 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.504812 4833 scope.go:117] "RemoveContainer" containerID="580d42cf4bc6e420ca15980ef5b35db9216b2480a0706f9d3103cb85e75dcdb8" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.534015 4833 scope.go:117] "RemoveContainer" containerID="86c6f05f9c64345a21933bae4b736e954c80c1ed6573ea75536cdedf6dd38fb1" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.555979 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.575606 4833 scope.go:117] "RemoveContainer" containerID="98df48e34c7cb5030f437d95a3bbea43cd8d6638638724499b33b63abb6f2c4e" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.578254 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.596680 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:50 crc kubenswrapper[4833]: E1013 08:17:50.597437 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-evaluator" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.597656 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-evaluator" Oct 13 08:17:50 crc kubenswrapper[4833]: E1013 08:17:50.597752 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-notifier" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.597823 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-notifier" Oct 13 08:17:50 crc kubenswrapper[4833]: E1013 08:17:50.597944 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-api" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.598957 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-api" Oct 13 08:17:50 crc kubenswrapper[4833]: E1013 08:17:50.599075 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-listener" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.599166 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-listener" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.599566 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-evaluator" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.599814 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-listener" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.599963 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-notifier" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.600074 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" containerName="aodh-api" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.602698 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.599763 4833 scope.go:117] "RemoveContainer" containerID="f3d2432ef23c146f568df2dd9046244d222bde8c3123642e99ee9b7a8d4bcca4" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.609785 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.610023 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jbng6" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.610107 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.610133 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.610517 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.615456 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.649137 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4f03e5-c261-465d-9fe2-d8abe2249840" path="/var/lib/kubelet/pods/6c4f03e5-c261-465d-9fe2-d8abe2249840/volumes" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.709104 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-public-tls-certs\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.709276 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-internal-tls-certs\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " 
pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.709473 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.709851 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-config-data\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.710120 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9zxc\" (UniqueName: \"kubernetes.io/projected/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-kube-api-access-t9zxc\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.710173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-scripts\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.812360 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9zxc\" (UniqueName: \"kubernetes.io/projected/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-kube-api-access-t9zxc\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.812438 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-scripts\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.812484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-public-tls-certs\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.812553 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-internal-tls-certs\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.812629 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.812719 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-config-data\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 
08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.817193 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-scripts\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.817376 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.818363 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-internal-tls-certs\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.826402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-config-data\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.827748 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-public-tls-certs\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.834309 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9zxc\" (UniqueName: \"kubernetes.io/projected/d1a01396-e0aa-4626-9eaf-7a75da4ca8c4-kube-api-access-t9zxc\") pod \"aodh-0\" (UID: \"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4\") " pod="openstack/aodh-0" Oct 13 08:17:50 crc kubenswrapper[4833]: I1013 08:17:50.936807 4833 util.go:30] "No sandbox for pod can be found. 
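The aodh-0 entries above walk each of the pod's six secret/projected volumes through the reconciler's three mount phases in order: operationExecutor.VerifyControllerAttachedVolume, operationExecutor.MountVolume started, and MountVolume.SetUp succeeded. A hedged sketch of grouping those phases per volume when scanning a saved journal on stdin; the phase strings come from the lines above, the bookkeeping is illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// volName pulls the quoted volume name out of reconciler lines, where the
// quotes may be escaped, e.g. ...started for volume \"config-data\" (UniqueName: ...
var volName = regexp.MustCompile(`for volume \\?"([^"\\]+)\\?"`)

// phase classifies a line by the reconciler phase strings seen in the log.
// "UnmountVolume started" does not collide: its 'm' is lowercase.
func phase(line string) string {
	switch {
	case strings.Contains(line, "VerifyControllerAttachedVolume started"):
		return "attach-verified"
	case strings.Contains(line, "MountVolume started"):
		return "mount-started"
	case strings.Contains(line, "MountVolume.SetUp succeeded"):
		return "mounted"
	}
	return ""
}

func main() {
	phases := map[string][]string{} // volume name -> ordered phases seen
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		p := phase(sc.Text())
		if p == "" {
			continue
		}
		if m := volName.FindStringSubmatch(sc.Text()); m != nil {
			phases[m[1]] = append(phases[m[1]], p)
		}
	}
	for vol, seen := range phases {
		fmt.Printf("%-25s %s\n", vol, strings.Join(seen, " -> "))
	}
}
```

For the aodh-0 lines above, a healthy volume prints `attach-verified -> mount-started -> mounted`; anything stuck before `mounted` is worth a closer look.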
Need to start a new one" pod="openstack/aodh-0" Oct 13 08:17:51 crc kubenswrapper[4833]: I1013 08:17:51.436260 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 13 08:17:51 crc kubenswrapper[4833]: I1013 08:17:51.517983 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4","Type":"ContainerStarted","Data":"97dd28be280d8c11b4e7fcbe686ffe6e88f98964225c642e07eb5bccbf6a0f9f"} Oct 13 08:17:51 crc kubenswrapper[4833]: I1013 08:17:51.895028 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 13 08:17:52 crc kubenswrapper[4833]: I1013 08:17:52.534017 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4","Type":"ContainerStarted","Data":"b242eecae542c5a37269c0addf529bdf42db43841097a0f72aa64df59c9f97f0"} Oct 13 08:17:52 crc kubenswrapper[4833]: I1013 08:17:52.628882 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:17:52 crc kubenswrapper[4833]: E1013 08:17:52.629401 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:17:53 crc kubenswrapper[4833]: I1013 08:17:53.563137 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4","Type":"ContainerStarted","Data":"ba31cdffcbca74e389969fe9ffacb88ed37a67693f8d00b7fb6bdc339cee8740"} Oct 13 08:17:54 crc kubenswrapper[4833]: I1013 08:17:54.576522 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4","Type":"ContainerStarted","Data":"c0a6cc87238c1103ed625391370f80680a574b1e5219c9bb4e9dc5a56a32dd63"} Oct 13 08:17:54 crc kubenswrapper[4833]: I1013 08:17:54.749303 4833 scope.go:117] "RemoveContainer" containerID="007d98f99ca4287b3c41915150b2deb6d9ef38eed2eb98d51a738384eb2af938" Oct 13 08:17:54 crc kubenswrapper[4833]: I1013 08:17:54.803728 4833 scope.go:117] "RemoveContainer" containerID="11063971e92da1fa7a57efc156de6920465e825efc9ea219766c066bf4888b14" Oct 13 08:17:54 crc kubenswrapper[4833]: I1013 08:17:54.848818 4833 scope.go:117] "RemoveContainer" containerID="ae26aa2c4f38d7fe0ed33a96d3b8a7b74bd259f213a5dac353fd4f8f94978da1" Oct 13 08:17:54 crc kubenswrapper[4833]: I1013 08:17:54.895176 4833 scope.go:117] "RemoveContainer" containerID="a69667235e16a56c18fbbea52a51157683029ce5975f076bbdfaefb13771b9eb" Oct 13 08:17:54 crc kubenswrapper[4833]: I1013 08:17:54.956618 4833 scope.go:117] "RemoveContainer" containerID="b9528076d34c6f4ab7de8725ef42f400732f54ae3847ccac307abd56be6dcd10" Oct 13 08:17:54 crc kubenswrapper[4833]: I1013 08:17:54.994146 4833 scope.go:117] "RemoveContainer" containerID="5ab1216531247aa7605e193a382346549c9f04a606ba2986266dc8f51ab8b208" Oct 13 08:17:55 crc kubenswrapper[4833]: I1013 08:17:55.031075 4833 scope.go:117] "RemoveContainer" containerID="844da1007de31aaeb1f08883e7fe1588a1633322d95536a4866c627f144a2b52" Oct 13 08:17:55 crc kubenswrapper[4833]: I1013 08:17:55.097011 4833 scope.go:117] "RemoveContainer" 
containerID="1e019ee8ab7636d3f4ae413125bf85692314ff29fbba2f5957ca978905a632ff" Oct 13 08:17:55 crc kubenswrapper[4833]: I1013 08:17:55.121426 4833 scope.go:117] "RemoveContainer" containerID="fb5432e2b6e6d68e6afd93f40a08f0a16e890be308d9d3c59af25bec8fd4be20" Oct 13 08:17:55 crc kubenswrapper[4833]: I1013 08:17:55.602932 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1a01396-e0aa-4626-9eaf-7a75da4ca8c4","Type":"ContainerStarted","Data":"aea05a9de616bea373daa47394bfe6e470f0c908a41b187c6f76bad4d85ad690"} Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.032939 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.144290156 podStartE2EDuration="6.032921326s" podCreationTimestamp="2025-10-13 08:17:50 +0000 UTC" firstStartedPulling="2025-10-13 08:17:51.434849319 +0000 UTC m=+6561.535272245" lastFinishedPulling="2025-10-13 08:17:54.323480499 +0000 UTC m=+6564.423903415" observedRunningTime="2025-10-13 08:17:55.702227125 +0000 UTC m=+6565.802650041" watchObservedRunningTime="2025-10-13 08:17:56.032921326 +0000 UTC m=+6566.133344242" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.040485 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k6ds6"] Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.050867 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k6ds6"] Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.644477 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1320a48-dc29-40b2-bdb6-b46a47c920a8" path="/var/lib/kubelet/pods/e1320a48-dc29-40b2-bdb6-b46a47c920a8/volumes" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.897873 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-989f485-s9wfv"] Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.899878 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.901694 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.916067 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-989f485-s9wfv"] Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.958322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-nb\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.958617 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-config\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.958655 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-openstack-cell1\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.959109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhc74\" (UniqueName: \"kubernetes.io/projected/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-kube-api-access-rhc74\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.959168 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-dns-svc\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:56 crc kubenswrapper[4833]: I1013 08:17:56.959193 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-sb\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.061685 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhc74\" (UniqueName: \"kubernetes.io/projected/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-kube-api-access-rhc74\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.061796 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-dns-svc\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" 
Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.061837 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-sb\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.061879 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-nb\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.061947 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-config\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.061993 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-openstack-cell1\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.062896 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-sb\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.062899 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-nb\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.062904 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-dns-svc\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.063108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-openstack-cell1\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.063661 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-config\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.080260 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhc74\" (UniqueName: 
\"kubernetes.io/projected/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-kube-api-access-rhc74\") pod \"dnsmasq-dns-989f485-s9wfv\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.227295 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:17:57 crc kubenswrapper[4833]: I1013 08:17:57.699601 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-989f485-s9wfv"] Oct 13 08:17:58 crc kubenswrapper[4833]: I1013 08:17:58.638367 4833 generic.go:334] "Generic (PLEG): container finished" podID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerID="4ab96d4680ec3243e19ad7aacc0addb857325cac99ab661d3559203ad183f199" exitCode=0 Oct 13 08:17:58 crc kubenswrapper[4833]: I1013 08:17:58.658423 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-989f485-s9wfv" event={"ID":"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b","Type":"ContainerDied","Data":"4ab96d4680ec3243e19ad7aacc0addb857325cac99ab661d3559203ad183f199"} Oct 13 08:17:58 crc kubenswrapper[4833]: I1013 08:17:58.658484 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-989f485-s9wfv" event={"ID":"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b","Type":"ContainerStarted","Data":"92063f5e185977a8bd1e7574518a5e1121fcd207a416a16f662fa12f9c107729"} Oct 13 08:17:59 crc kubenswrapper[4833]: I1013 08:17:59.651971 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-989f485-s9wfv" event={"ID":"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b","Type":"ContainerStarted","Data":"7894dfde8c8bdd56cb77eda09d0129c4abf6186e99590401e6f98bab4a75e6f2"} Oct 13 08:17:59 crc kubenswrapper[4833]: I1013 08:17:59.652352 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.229861 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.256961 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-989f485-s9wfv" podStartSLOduration=11.256937581 podStartE2EDuration="11.256937581s" podCreationTimestamp="2025-10-13 08:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:17:59.676468878 +0000 UTC m=+6569.776891834" watchObservedRunningTime="2025-10-13 08:18:07.256937581 +0000 UTC m=+6577.357360537" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.322595 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc6dfdd47-bv4vt"] Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.322870 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" podUID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" containerName="dnsmasq-dns" containerID="cri-o://386fabf5f1532486db306a843dfea8ffd580af5383c8c1a25d14b6943d8ebfd9" gracePeriod=10 Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.494668 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-687555fd5c-7h7rk"] Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.497314 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.512833 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-687555fd5c-7h7rk"] Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.627318 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:18:07 crc kubenswrapper[4833]: E1013 08:18:07.627574 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.641479 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxkcp\" (UniqueName: \"kubernetes.io/projected/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-kube-api-access-hxkcp\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.641523 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-ovsdbserver-nb\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.641647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-config\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.641697 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-dns-svc\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.641807 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-openstack-cell1\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.641841 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-ovsdbserver-sb\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.743989 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-ovsdbserver-sb\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.744105 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxkcp\" (UniqueName: \"kubernetes.io/projected/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-kube-api-access-hxkcp\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.744137 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-ovsdbserver-nb\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.744230 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-config\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.744340 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-dns-svc\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.744614 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-openstack-cell1\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.745660 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-openstack-cell1\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.746300 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-config\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.751815 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-ovsdbserver-sb\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.752078 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-dns-svc\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: 
\"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.754930 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-ovsdbserver-nb\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.761027 4833 generic.go:334] "Generic (PLEG): container finished" podID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" containerID="386fabf5f1532486db306a843dfea8ffd580af5383c8c1a25d14b6943d8ebfd9" exitCode=0 Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.761080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" event={"ID":"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5","Type":"ContainerDied","Data":"386fabf5f1532486db306a843dfea8ffd580af5383c8c1a25d14b6943d8ebfd9"} Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.779893 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxkcp\" (UniqueName: \"kubernetes.io/projected/2ff252af-a98c-42f9-b3e5-9a18d5fa2d10-kube-api-access-hxkcp\") pod \"dnsmasq-dns-687555fd5c-7h7rk\" (UID: \"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10\") " pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.832834 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:07 crc kubenswrapper[4833]: I1013 08:18:07.948236 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.050245 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-config\") pod \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.050292 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6fbn\" (UniqueName: \"kubernetes.io/projected/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-kube-api-access-p6fbn\") pod \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.050335 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-dns-svc\") pod \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.050361 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-nb\") pod \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\" (UID: \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.050570 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-sb\") pod \"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\" (UID: 
\"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5\") " Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.056310 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-kube-api-access-p6fbn" (OuterVolumeSpecName: "kube-api-access-p6fbn") pod "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" (UID: "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5"). InnerVolumeSpecName "kube-api-access-p6fbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.119770 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" (UID: "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.132808 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" (UID: "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.135048 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-config" (OuterVolumeSpecName: "config") pod "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" (UID: "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.152814 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.152848 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.152858 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6fbn\" (UniqueName: \"kubernetes.io/projected/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-kube-api-access-p6fbn\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.152867 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.160885 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" (UID: "5e5711f8-a06c-4ca2-88eb-9a128bc0ace5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.255115 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.291857 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-687555fd5c-7h7rk"] Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.358867 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmpnr"] Oct 13 08:18:08 crc kubenswrapper[4833]: E1013 08:18:08.359441 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" containerName="dnsmasq-dns" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.359464 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" containerName="dnsmasq-dns" Oct 13 08:18:08 crc kubenswrapper[4833]: E1013 08:18:08.359512 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" containerName="init" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.359526 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" containerName="init" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.360260 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" containerName="dnsmasq-dns" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.365905 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.368711 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmpnr"] Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.459183 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxh4f\" (UniqueName: \"kubernetes.io/projected/1436f5f3-014b-4127-8324-8f8f3c904a7f-kube-api-access-jxh4f\") pod \"certified-operators-nmpnr\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.459235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-utilities\") pod \"certified-operators-nmpnr\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.459308 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-catalog-content\") pod \"certified-operators-nmpnr\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.561504 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxh4f\" (UniqueName: \"kubernetes.io/projected/1436f5f3-014b-4127-8324-8f8f3c904a7f-kube-api-access-jxh4f\") pod \"certified-operators-nmpnr\" (UID: 
\"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.561598 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-utilities\") pod \"certified-operators-nmpnr\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.561664 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-catalog-content\") pod \"certified-operators-nmpnr\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.562118 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-catalog-content\") pod \"certified-operators-nmpnr\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.562235 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-utilities\") pod \"certified-operators-nmpnr\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.578971 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxh4f\" (UniqueName: \"kubernetes.io/projected/1436f5f3-014b-4127-8324-8f8f3c904a7f-kube-api-access-jxh4f\") pod \"certified-operators-nmpnr\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.693520 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.773299 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" event={"ID":"5e5711f8-a06c-4ca2-88eb-9a128bc0ace5","Type":"ContainerDied","Data":"611fd4a0844b3c501efca3ba9137dc44ce1a139da723e34a03cc1d5c646b2283"} Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.773344 4833 scope.go:117] "RemoveContainer" containerID="386fabf5f1532486db306a843dfea8ffd580af5383c8c1a25d14b6943d8ebfd9" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.773474 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cc6dfdd47-bv4vt" Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.778226 4833 generic.go:334] "Generic (PLEG): container finished" podID="2ff252af-a98c-42f9-b3e5-9a18d5fa2d10" containerID="00dd3bd988943755633ce4f4040c341b369885b9e0cad60895ad526b16da88fb" exitCode=0 Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.778260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" event={"ID":"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10","Type":"ContainerDied","Data":"00dd3bd988943755633ce4f4040c341b369885b9e0cad60895ad526b16da88fb"} Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.778284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" event={"ID":"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10","Type":"ContainerStarted","Data":"d3d5e740c29e3115792af8e1acb806da8a4b816c2f931021f186193cce3ddf0c"} Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.899548 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc6dfdd47-bv4vt"] Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.908281 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cc6dfdd47-bv4vt"] Oct 13 08:18:08 crc kubenswrapper[4833]: I1013 08:18:08.929580 4833 scope.go:117] "RemoveContainer" containerID="493e87936fe944374ecd58c0e93ec067a671b5653f29be33abaae8c9b791ab65" Oct 13 08:18:09 crc kubenswrapper[4833]: I1013 08:18:09.183935 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmpnr"] Oct 13 08:18:09 crc kubenswrapper[4833]: I1013 08:18:09.800975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" event={"ID":"2ff252af-a98c-42f9-b3e5-9a18d5fa2d10","Type":"ContainerStarted","Data":"f51575a43db80bb014ac8a2384f50077b3b112b6d0c82d4f3353441ae11c6751"} Oct 13 08:18:09 crc kubenswrapper[4833]: I1013 08:18:09.802037 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:09 crc kubenswrapper[4833]: I1013 08:18:09.804644 4833 generic.go:334] "Generic (PLEG): container finished" podID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerID="ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97" exitCode=0 Oct 13 08:18:09 crc kubenswrapper[4833]: I1013 08:18:09.805281 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmpnr" event={"ID":"1436f5f3-014b-4127-8324-8f8f3c904a7f","Type":"ContainerDied","Data":"ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97"} Oct 13 08:18:09 crc kubenswrapper[4833]: I1013 08:18:09.805302 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmpnr" event={"ID":"1436f5f3-014b-4127-8324-8f8f3c904a7f","Type":"ContainerStarted","Data":"f2324d3eaea184ca45935f7b0a21371828ad702495d34095b336cf4a61d14671"} Oct 13 08:18:09 crc kubenswrapper[4833]: I1013 08:18:09.841592 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" podStartSLOduration=2.841528669 podStartE2EDuration="2.841528669s" podCreationTimestamp="2025-10-13 08:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:18:09.831054971 +0000 UTC m=+6579.931477897" watchObservedRunningTime="2025-10-13 
08:18:09.841528669 +0000 UTC m=+6579.941951595" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.575169 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gvg2x"] Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.586418 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.591842 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvg2x"] Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.643228 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e5711f8-a06c-4ca2-88eb-9a128bc0ace5" path="/var/lib/kubelet/pods/5e5711f8-a06c-4ca2-88eb-9a128bc0ace5/volumes" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.661957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-catalog-content\") pod \"redhat-marketplace-gvg2x\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.662428 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-utilities\") pod \"redhat-marketplace-gvg2x\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.662493 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtc7\" (UniqueName: \"kubernetes.io/projected/14fdabeb-2170-4730-a758-7ed435812649-kube-api-access-djtc7\") pod \"redhat-marketplace-gvg2x\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.765097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-utilities\") pod \"redhat-marketplace-gvg2x\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.765453 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djtc7\" (UniqueName: \"kubernetes.io/projected/14fdabeb-2170-4730-a758-7ed435812649-kube-api-access-djtc7\") pod \"redhat-marketplace-gvg2x\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.765602 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-catalog-content\") pod \"redhat-marketplace-gvg2x\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.765601 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-utilities\") pod \"redhat-marketplace-gvg2x\" (UID: 
\"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.766110 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-catalog-content\") pod \"redhat-marketplace-gvg2x\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.796659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtc7\" (UniqueName: \"kubernetes.io/projected/14fdabeb-2170-4730-a758-7ed435812649-kube-api-access-djtc7\") pod \"redhat-marketplace-gvg2x\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:10 crc kubenswrapper[4833]: I1013 08:18:10.927325 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:11 crc kubenswrapper[4833]: W1013 08:18:11.497126 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14fdabeb_2170_4730_a758_7ed435812649.slice/crio-9e57c7d9fcdbceab7b6dca326b5113c8b7fc8242cc8748da20e9fbe670679c23 WatchSource:0}: Error finding container 9e57c7d9fcdbceab7b6dca326b5113c8b7fc8242cc8748da20e9fbe670679c23: Status 404 returned error can't find the container with id 9e57c7d9fcdbceab7b6dca326b5113c8b7fc8242cc8748da20e9fbe670679c23 Oct 13 08:18:11 crc kubenswrapper[4833]: I1013 08:18:11.498420 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvg2x"] Oct 13 08:18:11 crc kubenswrapper[4833]: I1013 08:18:11.829162 4833 generic.go:334] "Generic (PLEG): container finished" podID="14fdabeb-2170-4730-a758-7ed435812649" containerID="b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652" exitCode=0 Oct 13 08:18:11 crc kubenswrapper[4833]: I1013 08:18:11.829247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvg2x" event={"ID":"14fdabeb-2170-4730-a758-7ed435812649","Type":"ContainerDied","Data":"b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652"} Oct 13 08:18:11 crc kubenswrapper[4833]: I1013 08:18:11.829277 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvg2x" event={"ID":"14fdabeb-2170-4730-a758-7ed435812649","Type":"ContainerStarted","Data":"9e57c7d9fcdbceab7b6dca326b5113c8b7fc8242cc8748da20e9fbe670679c23"} Oct 13 08:18:11 crc kubenswrapper[4833]: I1013 08:18:11.832113 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmpnr" event={"ID":"1436f5f3-014b-4127-8324-8f8f3c904a7f","Type":"ContainerStarted","Data":"5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e"} Oct 13 08:18:12 crc kubenswrapper[4833]: I1013 08:18:12.844396 4833 generic.go:334] "Generic (PLEG): container finished" podID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerID="5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e" exitCode=0 Oct 13 08:18:12 crc kubenswrapper[4833]: I1013 08:18:12.844885 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmpnr" 
event={"ID":"1436f5f3-014b-4127-8324-8f8f3c904a7f","Type":"ContainerDied","Data":"5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e"} Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.418299 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v"] Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.420727 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.423997 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.424282 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.424890 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.426210 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.459985 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v"] Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.551989 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.552180 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.552241 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.552351 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbz2m\" (UniqueName: \"kubernetes.io/projected/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-kube-api-access-gbz2m\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: E1013 08:18:13.639811 4833 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14fdabeb_2170_4730_a758_7ed435812649.slice/crio-conmon-d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.654224 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.654300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.654877 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.654941 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbz2m\" (UniqueName: \"kubernetes.io/projected/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-kube-api-access-gbz2m\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.661457 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.661800 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.664731 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 
08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.670803 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbz2m\" (UniqueName: \"kubernetes.io/projected/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-kube-api-access-gbz2m\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.749162 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.882779 4833 generic.go:334] "Generic (PLEG): container finished" podID="14fdabeb-2170-4730-a758-7ed435812649" containerID="d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518" exitCode=0 Oct 13 08:18:13 crc kubenswrapper[4833]: I1013 08:18:13.882903 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvg2x" event={"ID":"14fdabeb-2170-4730-a758-7ed435812649","Type":"ContainerDied","Data":"d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518"} Oct 13 08:18:14 crc kubenswrapper[4833]: I1013 08:18:14.442646 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v"] Oct 13 08:18:14 crc kubenswrapper[4833]: W1013 08:18:14.446601 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode69c4e90_686a_4ca5_a9df_a661d4bb00ee.slice/crio-0fcdf5379ccd3d8bdec052a4e20074f7533aef3eef3d9f5504168d10796c5239 WatchSource:0}: Error finding container 0fcdf5379ccd3d8bdec052a4e20074f7533aef3eef3d9f5504168d10796c5239: Status 404 returned error can't find the container with id 0fcdf5379ccd3d8bdec052a4e20074f7533aef3eef3d9f5504168d10796c5239 Oct 13 08:18:14 crc kubenswrapper[4833]: I1013 08:18:14.896296 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmpnr" event={"ID":"1436f5f3-014b-4127-8324-8f8f3c904a7f","Type":"ContainerStarted","Data":"4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e"} Oct 13 08:18:14 crc kubenswrapper[4833]: I1013 08:18:14.900388 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvg2x" event={"ID":"14fdabeb-2170-4730-a758-7ed435812649","Type":"ContainerStarted","Data":"219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f"} Oct 13 08:18:14 crc kubenswrapper[4833]: I1013 08:18:14.903836 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" event={"ID":"e69c4e90-686a-4ca5-a9df-a661d4bb00ee","Type":"ContainerStarted","Data":"0fcdf5379ccd3d8bdec052a4e20074f7533aef3eef3d9f5504168d10796c5239"} Oct 13 08:18:14 crc kubenswrapper[4833]: I1013 08:18:14.938899 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmpnr" podStartSLOduration=3.290446839 podStartE2EDuration="6.93887335s" podCreationTimestamp="2025-10-13 08:18:08 +0000 UTC" firstStartedPulling="2025-10-13 08:18:09.806774161 +0000 UTC m=+6579.907197097" lastFinishedPulling="2025-10-13 08:18:13.455200692 +0000 UTC m=+6583.555623608" observedRunningTime="2025-10-13 08:18:14.925423448 +0000 UTC m=+6585.025846374" 
watchObservedRunningTime="2025-10-13 08:18:14.93887335 +0000 UTC m=+6585.039296266" Oct 13 08:18:14 crc kubenswrapper[4833]: I1013 08:18:14.948950 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gvg2x" podStartSLOduration=2.278135489 podStartE2EDuration="4.948923026s" podCreationTimestamp="2025-10-13 08:18:10 +0000 UTC" firstStartedPulling="2025-10-13 08:18:11.83110965 +0000 UTC m=+6581.931532566" lastFinishedPulling="2025-10-13 08:18:14.501897187 +0000 UTC m=+6584.602320103" observedRunningTime="2025-10-13 08:18:14.947882556 +0000 UTC m=+6585.048305492" watchObservedRunningTime="2025-10-13 08:18:14.948923026 +0000 UTC m=+6585.049345952" Oct 13 08:18:17 crc kubenswrapper[4833]: I1013 08:18:17.834854 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-687555fd5c-7h7rk" Oct 13 08:18:17 crc kubenswrapper[4833]: I1013 08:18:17.921852 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-989f485-s9wfv"] Oct 13 08:18:17 crc kubenswrapper[4833]: I1013 08:18:17.922120 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-989f485-s9wfv" podUID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerName="dnsmasq-dns" containerID="cri-o://7894dfde8c8bdd56cb77eda09d0129c4abf6186e99590401e6f98bab4a75e6f2" gracePeriod=10 Oct 13 08:18:18 crc kubenswrapper[4833]: I1013 08:18:18.693795 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:18 crc kubenswrapper[4833]: I1013 08:18:18.693942 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:18 crc kubenswrapper[4833]: I1013 08:18:18.946908 4833 generic.go:334] "Generic (PLEG): container finished" podID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerID="7894dfde8c8bdd56cb77eda09d0129c4abf6186e99590401e6f98bab4a75e6f2" exitCode=0 Oct 13 08:18:18 crc kubenswrapper[4833]: I1013 08:18:18.947567 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-989f485-s9wfv" event={"ID":"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b","Type":"ContainerDied","Data":"7894dfde8c8bdd56cb77eda09d0129c4abf6186e99590401e6f98bab4a75e6f2"} Oct 13 08:18:19 crc kubenswrapper[4833]: I1013 08:18:19.743234 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nmpnr" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="registry-server" probeResult="failure" output=< Oct 13 08:18:19 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Oct 13 08:18:19 crc kubenswrapper[4833]: > Oct 13 08:18:20 crc kubenswrapper[4833]: I1013 08:18:20.927582 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:20 crc kubenswrapper[4833]: I1013 08:18:20.928059 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:20 crc kubenswrapper[4833]: I1013 08:18:20.992988 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:21 crc kubenswrapper[4833]: I1013 08:18:21.051080 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:21 
crc kubenswrapper[4833]: I1013 08:18:21.236520 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvg2x"] Oct 13 08:18:22 crc kubenswrapper[4833]: I1013 08:18:22.228025 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-989f485-s9wfv" podUID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.160:5353: connect: connection refused" Oct 13 08:18:22 crc kubenswrapper[4833]: I1013 08:18:22.627708 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:18:22 crc kubenswrapper[4833]: E1013 08:18:22.628266 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:18:22 crc kubenswrapper[4833]: I1013 08:18:22.985508 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gvg2x" podUID="14fdabeb-2170-4730-a758-7ed435812649" containerName="registry-server" containerID="cri-o://219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f" gracePeriod=2 Oct 13 08:18:23 crc kubenswrapper[4833]: I1013 08:18:23.926804 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:23 crc kubenswrapper[4833]: I1013 08:18:23.938284 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.006113 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" event={"ID":"e69c4e90-686a-4ca5-a9df-a661d4bb00ee","Type":"ContainerStarted","Data":"db37e0e316150850e973a973575920e5a4fc1443ac4b5c2409b8e753deb0c8f3"} Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.008424 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-989f485-s9wfv" event={"ID":"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b","Type":"ContainerDied","Data":"92063f5e185977a8bd1e7574518a5e1121fcd207a416a16f662fa12f9c107729"} Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.008457 4833 scope.go:117] "RemoveContainer" containerID="7894dfde8c8bdd56cb77eda09d0129c4abf6186e99590401e6f98bab4a75e6f2" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.008572 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-989f485-s9wfv" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.012230 4833 generic.go:334] "Generic (PLEG): container finished" podID="14fdabeb-2170-4730-a758-7ed435812649" containerID="219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f" exitCode=0 Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.012284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvg2x" event={"ID":"14fdabeb-2170-4730-a758-7ed435812649","Type":"ContainerDied","Data":"219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f"} Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.012308 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvg2x" event={"ID":"14fdabeb-2170-4730-a758-7ed435812649","Type":"ContainerDied","Data":"9e57c7d9fcdbceab7b6dca326b5113c8b7fc8242cc8748da20e9fbe670679c23"} Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.012373 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvg2x" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.033928 4833 scope.go:117] "RemoveContainer" containerID="4ab96d4680ec3243e19ad7aacc0addb857325cac99ab661d3559203ad183f199" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.034142 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" podStartSLOduration=1.961542964 podStartE2EDuration="11.034125738s" podCreationTimestamp="2025-10-13 08:18:13 +0000 UTC" firstStartedPulling="2025-10-13 08:18:14.449528028 +0000 UTC m=+6584.549950954" lastFinishedPulling="2025-10-13 08:18:23.522110812 +0000 UTC m=+6593.622533728" observedRunningTime="2025-10-13 08:18:24.020588163 +0000 UTC m=+6594.121011079" watchObservedRunningTime="2025-10-13 08:18:24.034125738 +0000 UTC m=+6594.134548654" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.053525 4833 scope.go:117] "RemoveContainer" containerID="219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.069904 4833 scope.go:117] "RemoveContainer" containerID="d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.087593 4833 scope.go:117] "RemoveContainer" containerID="b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.103850 4833 scope.go:117] "RemoveContainer" containerID="219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f" Oct 13 08:18:24 crc kubenswrapper[4833]: E1013 08:18:24.104166 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f\": container with ID starting with 219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f not found: ID does not exist" containerID="219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104215 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f"} err="failed to get container status \"219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f\": rpc error: code = NotFound desc = 
could not find container \"219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f\": container with ID starting with 219ff26c652bbe09e5766301e2f4049dcd3e06f1cc1ebe083589fbcfbaaae93f not found: ID does not exist" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104238 4833 scope.go:117] "RemoveContainer" containerID="d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104480 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-catalog-content\") pod \"14fdabeb-2170-4730-a758-7ed435812649\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104524 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-utilities\") pod \"14fdabeb-2170-4730-a758-7ed435812649\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " Oct 13 08:18:24 crc kubenswrapper[4833]: E1013 08:18:24.104574 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518\": container with ID starting with d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518 not found: ID does not exist" containerID="d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104625 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518"} err="failed to get container status \"d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518\": rpc error: code = NotFound desc = could not find container \"d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518\": container with ID starting with d9fbf0905d9e3c0580e93facf095a80f4f8bb377c6d1b494281bc647407c3518 not found: ID does not exist" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104657 4833 scope.go:117] "RemoveContainer" containerID="b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104602 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-nb\") pod \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104750 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-openstack-cell1\") pod \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.104959 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhc74\" (UniqueName: \"kubernetes.io/projected/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-kube-api-access-rhc74\") pod \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.105053 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-config\") pod \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.105323 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djtc7\" (UniqueName: \"kubernetes.io/projected/14fdabeb-2170-4730-a758-7ed435812649-kube-api-access-djtc7\") pod \"14fdabeb-2170-4730-a758-7ed435812649\" (UID: \"14fdabeb-2170-4730-a758-7ed435812649\") " Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.105332 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-utilities" (OuterVolumeSpecName: "utilities") pod "14fdabeb-2170-4730-a758-7ed435812649" (UID: "14fdabeb-2170-4730-a758-7ed435812649"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.105345 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-dns-svc\") pod \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.105374 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-sb\") pod \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\" (UID: \"f6c0f7cb-03d5-4871-9f39-181d3ca8c00b\") " Oct 13 08:18:24 crc kubenswrapper[4833]: E1013 08:18:24.105176 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652\": container with ID starting with b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652 not found: ID does not exist" containerID="b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.105736 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652"} err="failed to get container status \"b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652\": rpc error: code = NotFound desc = could not find container \"b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652\": container with ID starting with b498d2301c8dee56545d5924ca2e4d1bf123feefb1400aea0b2abff3d873a652 not found: ID does not exist" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.106269 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.110772 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fdabeb-2170-4730-a758-7ed435812649-kube-api-access-djtc7" (OuterVolumeSpecName: "kube-api-access-djtc7") pod "14fdabeb-2170-4730-a758-7ed435812649" (UID: "14fdabeb-2170-4730-a758-7ed435812649"). InnerVolumeSpecName "kube-api-access-djtc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.110808 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-kube-api-access-rhc74" (OuterVolumeSpecName: "kube-api-access-rhc74") pod "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" (UID: "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b"). InnerVolumeSpecName "kube-api-access-rhc74". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.115581 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14fdabeb-2170-4730-a758-7ed435812649" (UID: "14fdabeb-2170-4730-a758-7ed435812649"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.163110 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" (UID: "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.173338 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" (UID: "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.180894 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-config" (OuterVolumeSpecName: "config") pod "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" (UID: "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.190259 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" (UID: "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.191371 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" (UID: "f6c0f7cb-03d5-4871-9f39-181d3ca8c00b"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.210149 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.210183 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.210194 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fdabeb-2170-4730-a758-7ed435812649-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.210202 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.210210 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-openstack-cell1\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.210220 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhc74\" (UniqueName: \"kubernetes.io/projected/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-kube-api-access-rhc74\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.210229 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b-config\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.210238 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djtc7\" (UniqueName: \"kubernetes.io/projected/14fdabeb-2170-4730-a758-7ed435812649-kube-api-access-djtc7\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.361918 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-989f485-s9wfv"] Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.371535 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-989f485-s9wfv"] Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.384648 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvg2x"] Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.392899 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvg2x"] Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.640920 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fdabeb-2170-4730-a758-7ed435812649" path="/var/lib/kubelet/pods/14fdabeb-2170-4730-a758-7ed435812649/volumes" Oct 13 08:18:24 crc kubenswrapper[4833]: I1013 08:18:24.642108 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" path="/var/lib/kubelet/pods/f6c0f7cb-03d5-4871-9f39-181d3ca8c00b/volumes" Oct 13 08:18:28 crc kubenswrapper[4833]: I1013 08:18:28.774227 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:28 crc kubenswrapper[4833]: I1013 08:18:28.858743 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:29 crc kubenswrapper[4833]: I1013 08:18:29.021615 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmpnr"] Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.075643 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmpnr" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="registry-server" containerID="cri-o://4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e" gracePeriod=2 Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.603362 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.671814 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-catalog-content\") pod \"1436f5f3-014b-4127-8324-8f8f3c904a7f\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.672023 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxh4f\" (UniqueName: \"kubernetes.io/projected/1436f5f3-014b-4127-8324-8f8f3c904a7f-kube-api-access-jxh4f\") pod \"1436f5f3-014b-4127-8324-8f8f3c904a7f\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.672170 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-utilities\") pod \"1436f5f3-014b-4127-8324-8f8f3c904a7f\" (UID: \"1436f5f3-014b-4127-8324-8f8f3c904a7f\") " Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.672746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-utilities" (OuterVolumeSpecName: "utilities") pod "1436f5f3-014b-4127-8324-8f8f3c904a7f" (UID: "1436f5f3-014b-4127-8324-8f8f3c904a7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.673359 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.679163 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1436f5f3-014b-4127-8324-8f8f3c904a7f-kube-api-access-jxh4f" (OuterVolumeSpecName: "kube-api-access-jxh4f") pod "1436f5f3-014b-4127-8324-8f8f3c904a7f" (UID: "1436f5f3-014b-4127-8324-8f8f3c904a7f"). InnerVolumeSpecName "kube-api-access-jxh4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.722672 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1436f5f3-014b-4127-8324-8f8f3c904a7f" (UID: "1436f5f3-014b-4127-8324-8f8f3c904a7f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.775197 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1436f5f3-014b-4127-8324-8f8f3c904a7f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:30 crc kubenswrapper[4833]: I1013 08:18:30.775244 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxh4f\" (UniqueName: \"kubernetes.io/projected/1436f5f3-014b-4127-8324-8f8f3c904a7f-kube-api-access-jxh4f\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.088763 4833 generic.go:334] "Generic (PLEG): container finished" podID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerID="4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e" exitCode=0 Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.088808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmpnr" event={"ID":"1436f5f3-014b-4127-8324-8f8f3c904a7f","Type":"ContainerDied","Data":"4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e"} Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.088836 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmpnr" event={"ID":"1436f5f3-014b-4127-8324-8f8f3c904a7f","Type":"ContainerDied","Data":"f2324d3eaea184ca45935f7b0a21371828ad702495d34095b336cf4a61d14671"} Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.088845 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmpnr" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.088855 4833 scope.go:117] "RemoveContainer" containerID="4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.131462 4833 scope.go:117] "RemoveContainer" containerID="5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.131791 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmpnr"] Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.146736 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nmpnr"] Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.174169 4833 scope.go:117] "RemoveContainer" containerID="ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.220368 4833 scope.go:117] "RemoveContainer" containerID="4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e" Oct 13 08:18:31 crc kubenswrapper[4833]: E1013 08:18:31.221485 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e\": container with ID starting with 4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e not found: ID does not exist" containerID="4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.221566 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e"} err="failed to get container status 
\"4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e\": rpc error: code = NotFound desc = could not find container \"4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e\": container with ID starting with 4427db27da353a231a222aedc28fef1edbcb6ee8890314dac150c28ce16a090e not found: ID does not exist" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.221609 4833 scope.go:117] "RemoveContainer" containerID="5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e" Oct 13 08:18:31 crc kubenswrapper[4833]: E1013 08:18:31.222151 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e\": container with ID starting with 5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e not found: ID does not exist" containerID="5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.222257 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e"} err="failed to get container status \"5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e\": rpc error: code = NotFound desc = could not find container \"5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e\": container with ID starting with 5a6ae17d2012b4a9857a9d4822f4822d36f830b8f857d02e785bbea8f36e753e not found: ID does not exist" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.222352 4833 scope.go:117] "RemoveContainer" containerID="ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97" Oct 13 08:18:31 crc kubenswrapper[4833]: E1013 08:18:31.223071 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97\": container with ID starting with ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97 not found: ID does not exist" containerID="ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97" Oct 13 08:18:31 crc kubenswrapper[4833]: I1013 08:18:31.223116 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97"} err="failed to get container status \"ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97\": rpc error: code = NotFound desc = could not find container \"ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97\": container with ID starting with ea0da6501e517fb297f86345d09bf8c9c7b911193b7fa62a2f43f0ee7210df97 not found: ID does not exist" Oct 13 08:18:32 crc kubenswrapper[4833]: I1013 08:18:32.638746 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" path="/var/lib/kubelet/pods/1436f5f3-014b-4127-8324-8f8f3c904a7f/volumes" Oct 13 08:18:37 crc kubenswrapper[4833]: I1013 08:18:37.626908 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:18:37 crc kubenswrapper[4833]: E1013 08:18:37.627680 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:18:38 crc kubenswrapper[4833]: I1013 08:18:38.172620 4833 generic.go:334] "Generic (PLEG): container finished" podID="e69c4e90-686a-4ca5-a9df-a661d4bb00ee" containerID="db37e0e316150850e973a973575920e5a4fc1443ac4b5c2409b8e753deb0c8f3" exitCode=0 Oct 13 08:18:38 crc kubenswrapper[4833]: I1013 08:18:38.172718 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" event={"ID":"e69c4e90-686a-4ca5-a9df-a661d4bb00ee","Type":"ContainerDied","Data":"db37e0e316150850e973a973575920e5a4fc1443ac4b5c2409b8e753deb0c8f3"} Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.746153 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.879510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-pre-adoption-validation-combined-ca-bundle\") pod \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.879963 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-ssh-key\") pod \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.880072 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbz2m\" (UniqueName: \"kubernetes.io/projected/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-kube-api-access-gbz2m\") pod \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.880127 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-inventory\") pod \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\" (UID: \"e69c4e90-686a-4ca5-a9df-a661d4bb00ee\") " Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.885897 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-kube-api-access-gbz2m" (OuterVolumeSpecName: "kube-api-access-gbz2m") pod "e69c4e90-686a-4ca5-a9df-a661d4bb00ee" (UID: "e69c4e90-686a-4ca5-a9df-a661d4bb00ee"). InnerVolumeSpecName "kube-api-access-gbz2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.886123 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "e69c4e90-686a-4ca5-a9df-a661d4bb00ee" (UID: "e69c4e90-686a-4ca5-a9df-a661d4bb00ee"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.913860 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e69c4e90-686a-4ca5-a9df-a661d4bb00ee" (UID: "e69c4e90-686a-4ca5-a9df-a661d4bb00ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.936130 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-inventory" (OuterVolumeSpecName: "inventory") pod "e69c4e90-686a-4ca5-a9df-a661d4bb00ee" (UID: "e69c4e90-686a-4ca5-a9df-a661d4bb00ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.983357 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.983394 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbz2m\" (UniqueName: \"kubernetes.io/projected/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-kube-api-access-gbz2m\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.983411 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:39 crc kubenswrapper[4833]: I1013 08:18:39.983425 4833 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c4e90-686a-4ca5-a9df-a661d4bb00ee-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:18:40 crc kubenswrapper[4833]: I1013 08:18:40.203647 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" event={"ID":"e69c4e90-686a-4ca5-a9df-a661d4bb00ee","Type":"ContainerDied","Data":"0fcdf5379ccd3d8bdec052a4e20074f7533aef3eef3d9f5504168d10796c5239"} Oct 13 08:18:40 crc kubenswrapper[4833]: I1013 08:18:40.203688 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fcdf5379ccd3d8bdec052a4e20074f7533aef3eef3d9f5504168d10796c5239" Oct 13 08:18:40 crc kubenswrapper[4833]: I1013 08:18:40.203749 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.973858 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n"] Oct 13 08:18:50 crc kubenswrapper[4833]: E1013 08:18:50.975222 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerName="dnsmasq-dns" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975248 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerName="dnsmasq-dns" Oct 13 08:18:50 crc kubenswrapper[4833]: E1013 08:18:50.975271 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69c4e90-686a-4ca5-a9df-a661d4bb00ee" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975286 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69c4e90-686a-4ca5-a9df-a661d4bb00ee" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 13 08:18:50 crc kubenswrapper[4833]: E1013 08:18:50.975313 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fdabeb-2170-4730-a758-7ed435812649" containerName="extract-content" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975325 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fdabeb-2170-4730-a758-7ed435812649" containerName="extract-content" Oct 13 08:18:50 crc kubenswrapper[4833]: E1013 08:18:50.975355 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fdabeb-2170-4730-a758-7ed435812649" containerName="registry-server" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975366 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fdabeb-2170-4730-a758-7ed435812649" containerName="registry-server" Oct 13 08:18:50 crc kubenswrapper[4833]: E1013 08:18:50.975393 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="registry-server" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975405 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="registry-server" Oct 13 08:18:50 crc kubenswrapper[4833]: E1013 08:18:50.975429 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="extract-content" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975440 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="extract-content" Oct 13 08:18:50 crc kubenswrapper[4833]: E1013 08:18:50.975473 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="extract-utilities" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975485 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="extract-utilities" Oct 13 08:18:50 crc kubenswrapper[4833]: E1013 08:18:50.975535 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerName="init" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975578 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerName="init" Oct 13 08:18:50 crc 
kubenswrapper[4833]: E1013 08:18:50.975625 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fdabeb-2170-4730-a758-7ed435812649" containerName="extract-utilities" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.975638 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fdabeb-2170-4730-a758-7ed435812649" containerName="extract-utilities" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.976054 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1436f5f3-014b-4127-8324-8f8f3c904a7f" containerName="registry-server" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.976097 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fdabeb-2170-4730-a758-7ed435812649" containerName="registry-server" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.976121 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e69c4e90-686a-4ca5-a9df-a661d4bb00ee" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.976166 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c0f7cb-03d5-4871-9f39-181d3ca8c00b" containerName="dnsmasq-dns" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.977706 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.981190 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.981466 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.981873 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.983113 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n"] Oct 13 08:18:50 crc kubenswrapper[4833]: I1013 08:18:50.985905 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.156429 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.157259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdpq4\" (UniqueName: \"kubernetes.io/projected/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-kube-api-access-xdpq4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.157422 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.157601 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.259800 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.260084 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.260223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdpq4\" (UniqueName: \"kubernetes.io/projected/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-kube-api-access-xdpq4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.260297 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.267605 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.272237 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.277440 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.324749 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdpq4\" (UniqueName: \"kubernetes.io/projected/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-kube-api-access-xdpq4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:51 crc kubenswrapper[4833]: I1013 08:18:51.606937 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" Oct 13 08:18:52 crc kubenswrapper[4833]: I1013 08:18:52.149498 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n"] Oct 13 08:18:52 crc kubenswrapper[4833]: I1013 08:18:52.376939 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" event={"ID":"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4","Type":"ContainerStarted","Data":"632ff0e6f345a3da0b02f20612c311210e179ab424af866566e91be0d0136121"} Oct 13 08:18:52 crc kubenswrapper[4833]: I1013 08:18:52.627800 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:18:52 crc kubenswrapper[4833]: E1013 08:18:52.628336 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:18:53 crc kubenswrapper[4833]: I1013 08:18:53.385965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" event={"ID":"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4","Type":"ContainerStarted","Data":"697329c65ff5771003929922d5ec04e0f9d6f3031ca811dd25281009fd49a702"} Oct 13 08:18:53 crc kubenswrapper[4833]: I1013 08:18:53.408365 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" podStartSLOduration=3.014295911 podStartE2EDuration="3.408329153s" podCreationTimestamp="2025-10-13 08:18:50 +0000 UTC" firstStartedPulling="2025-10-13 08:18:52.143060944 +0000 UTC m=+6622.243483880" lastFinishedPulling="2025-10-13 08:18:52.537094196 +0000 UTC m=+6622.637517122" observedRunningTime="2025-10-13 08:18:53.40188282 +0000 UTC m=+6623.502305776" watchObservedRunningTime="2025-10-13 08:18:53.408329153 +0000 UTC m=+6623.508752119" Oct 13 08:18:55 crc kubenswrapper[4833]: I1013 08:18:55.447396 4833 scope.go:117] "RemoveContainer" containerID="04303dc519d7f152ba090fffe804beeca60ccce1aa5d48b0723905b79b3ff0f3" Oct 13 08:19:05 crc kubenswrapper[4833]: I1013 08:19:05.627716 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:19:06 crc kubenswrapper[4833]: I1013 08:19:06.518622 4833 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"a6279157bea251d8a71a2d9a31b3efaebd21be95ceab991153799af739895c64"} Oct 13 08:19:29 crc kubenswrapper[4833]: I1013 08:19:29.049026 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-8hbrm"] Oct 13 08:19:29 crc kubenswrapper[4833]: I1013 08:19:29.060316 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-8hbrm"] Oct 13 08:19:30 crc kubenswrapper[4833]: I1013 08:19:30.649941 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b9de1b-e509-4ca6-8eb1-d31cade8c30e" path="/var/lib/kubelet/pods/26b9de1b-e509-4ca6-8eb1-d31cade8c30e/volumes" Oct 13 08:19:40 crc kubenswrapper[4833]: I1013 08:19:40.043239 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-7442-account-create-qdw4q"] Oct 13 08:19:40 crc kubenswrapper[4833]: I1013 08:19:40.053375 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-7442-account-create-qdw4q"] Oct 13 08:19:40 crc kubenswrapper[4833]: I1013 08:19:40.646163 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7fd704-430c-4f1f-9250-5e0619873cd0" path="/var/lib/kubelet/pods/8d7fd704-430c-4f1f-9250-5e0619873cd0/volumes" Oct 13 08:19:46 crc kubenswrapper[4833]: I1013 08:19:46.029369 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-lzsts"] Oct 13 08:19:46 crc kubenswrapper[4833]: I1013 08:19:46.040788 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-lzsts"] Oct 13 08:19:46 crc kubenswrapper[4833]: I1013 08:19:46.643049 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a922c41a-416a-4a00-8360-37b21d30e628" path="/var/lib/kubelet/pods/a922c41a-416a-4a00-8360-37b21d30e628/volumes" Oct 13 08:19:55 crc kubenswrapper[4833]: I1013 08:19:55.624357 4833 scope.go:117] "RemoveContainer" containerID="48f0df259e23bba0065574c5ff0ff317ad966590bdf222575956dc62322c9e62" Oct 13 08:19:55 crc kubenswrapper[4833]: I1013 08:19:55.680063 4833 scope.go:117] "RemoveContainer" containerID="d6acac150a4c0368f9593e9cdba383e6d0fcb161bb8aa90d2d5807df2ceafa78" Oct 13 08:19:55 crc kubenswrapper[4833]: I1013 08:19:55.734510 4833 scope.go:117] "RemoveContainer" containerID="b72154f6d783e80e8bcc5b5a6e348cbcdb32a32bd379fe4860a9da1a65386805" Oct 13 08:19:56 crc kubenswrapper[4833]: I1013 08:19:56.045396 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-b12f-account-create-72jt8"] Oct 13 08:19:56 crc kubenswrapper[4833]: I1013 08:19:56.056850 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-b12f-account-create-72jt8"] Oct 13 08:19:56 crc kubenswrapper[4833]: I1013 08:19:56.648712 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25461515-6805-4b45-a203-c778beb80fb4" path="/var/lib/kubelet/pods/25461515-6805-4b45-a203-c778beb80fb4/volumes" Oct 13 08:20:40 crc kubenswrapper[4833]: I1013 08:20:40.056307 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-5rmws"] Oct 13 08:20:40 crc kubenswrapper[4833]: I1013 08:20:40.065449 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-5rmws"] Oct 13 08:20:40 crc kubenswrapper[4833]: I1013 08:20:40.640271 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="948a7fdd-f311-454f-b73f-3c62b09a90eb" path="/var/lib/kubelet/pods/948a7fdd-f311-454f-b73f-3c62b09a90eb/volumes" Oct 13 08:20:55 crc kubenswrapper[4833]: I1013 08:20:55.858612 4833 scope.go:117] "RemoveContainer" containerID="96d3d80e29049da8248ff57519fda8b5efc17505a8288bf019e45908f18ada3c" Oct 13 08:20:55 crc kubenswrapper[4833]: I1013 08:20:55.897248 4833 scope.go:117] "RemoveContainer" containerID="94d4494ae52fcd809196f14ecf43faa3136a692a4f73c8aa4d607171fc40daf9" Oct 13 08:20:55 crc kubenswrapper[4833]: I1013 08:20:55.980531 4833 scope.go:117] "RemoveContainer" containerID="4324029f5827ab866c52db217e214e944d5f60b903fa03bc6fbdb7e396911f31" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.116101 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fr4zn"] Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.127898 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.180921 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fr4zn"] Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.203219 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75nsc\" (UniqueName: \"kubernetes.io/projected/2911ee94-e268-473a-aa38-fe565ee8c55f-kube-api-access-75nsc\") pod \"community-operators-fr4zn\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.203406 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-catalog-content\") pod \"community-operators-fr4zn\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.203444 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-utilities\") pod \"community-operators-fr4zn\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.305980 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-catalog-content\") pod \"community-operators-fr4zn\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.306030 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-utilities\") pod \"community-operators-fr4zn\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.306253 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75nsc\" (UniqueName: \"kubernetes.io/projected/2911ee94-e268-473a-aa38-fe565ee8c55f-kube-api-access-75nsc\") pod \"community-operators-fr4zn\" (UID: 
\"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.306648 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-catalog-content\") pod \"community-operators-fr4zn\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.307985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-utilities\") pod \"community-operators-fr4zn\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.326280 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75nsc\" (UniqueName: \"kubernetes.io/projected/2911ee94-e268-473a-aa38-fe565ee8c55f-kube-api-access-75nsc\") pod \"community-operators-fr4zn\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.487580 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:20 crc kubenswrapper[4833]: I1013 08:21:20.980519 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fr4zn"] Oct 13 08:21:21 crc kubenswrapper[4833]: I1013 08:21:21.167288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr4zn" event={"ID":"2911ee94-e268-473a-aa38-fe565ee8c55f","Type":"ContainerStarted","Data":"b166293c381ef66989175ab1bc90f84ba11ce449d77d8ea8bcdd1bd94e89d1eb"} Oct 13 08:21:22 crc kubenswrapper[4833]: I1013 08:21:22.182597 4833 generic.go:334] "Generic (PLEG): container finished" podID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerID="bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7" exitCode=0 Oct 13 08:21:22 crc kubenswrapper[4833]: I1013 08:21:22.182682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr4zn" event={"ID":"2911ee94-e268-473a-aa38-fe565ee8c55f","Type":"ContainerDied","Data":"bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7"} Oct 13 08:21:22 crc kubenswrapper[4833]: I1013 08:21:22.185963 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 08:21:23 crc kubenswrapper[4833]: I1013 08:21:23.195358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr4zn" event={"ID":"2911ee94-e268-473a-aa38-fe565ee8c55f","Type":"ContainerStarted","Data":"9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42"} Oct 13 08:21:25 crc kubenswrapper[4833]: I1013 08:21:25.221439 4833 generic.go:334] "Generic (PLEG): container finished" podID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerID="9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42" exitCode=0 Oct 13 08:21:25 crc kubenswrapper[4833]: I1013 08:21:25.221500 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr4zn" 
event={"ID":"2911ee94-e268-473a-aa38-fe565ee8c55f","Type":"ContainerDied","Data":"9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42"} Oct 13 08:21:26 crc kubenswrapper[4833]: I1013 08:21:26.233028 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr4zn" event={"ID":"2911ee94-e268-473a-aa38-fe565ee8c55f","Type":"ContainerStarted","Data":"4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca"} Oct 13 08:21:26 crc kubenswrapper[4833]: I1013 08:21:26.252071 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fr4zn" podStartSLOduration=2.696098979 podStartE2EDuration="6.252054081s" podCreationTimestamp="2025-10-13 08:21:20 +0000 UTC" firstStartedPulling="2025-10-13 08:21:22.184708311 +0000 UTC m=+6772.285131277" lastFinishedPulling="2025-10-13 08:21:25.740663443 +0000 UTC m=+6775.841086379" observedRunningTime="2025-10-13 08:21:26.250975451 +0000 UTC m=+6776.351398377" watchObservedRunningTime="2025-10-13 08:21:26.252054081 +0000 UTC m=+6776.352477007" Oct 13 08:21:30 crc kubenswrapper[4833]: I1013 08:21:30.488124 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:30 crc kubenswrapper[4833]: I1013 08:21:30.488754 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:30 crc kubenswrapper[4833]: I1013 08:21:30.542877 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:21:30 crc kubenswrapper[4833]: I1013 08:21:30.542961 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:21:30 crc kubenswrapper[4833]: I1013 08:21:30.585938 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:31 crc kubenswrapper[4833]: I1013 08:21:31.380129 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:32 crc kubenswrapper[4833]: I1013 08:21:32.701359 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fr4zn"] Oct 13 08:21:33 crc kubenswrapper[4833]: I1013 08:21:33.314082 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fr4zn" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerName="registry-server" containerID="cri-o://4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca" gracePeriod=2 Oct 13 08:21:33 crc kubenswrapper[4833]: I1013 08:21:33.845606 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:33 crc kubenswrapper[4833]: I1013 08:21:33.977581 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-catalog-content\") pod \"2911ee94-e268-473a-aa38-fe565ee8c55f\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " Oct 13 08:21:33 crc kubenswrapper[4833]: I1013 08:21:33.977725 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75nsc\" (UniqueName: \"kubernetes.io/projected/2911ee94-e268-473a-aa38-fe565ee8c55f-kube-api-access-75nsc\") pod \"2911ee94-e268-473a-aa38-fe565ee8c55f\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " Oct 13 08:21:33 crc kubenswrapper[4833]: I1013 08:21:33.977854 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-utilities\") pod \"2911ee94-e268-473a-aa38-fe565ee8c55f\" (UID: \"2911ee94-e268-473a-aa38-fe565ee8c55f\") " Oct 13 08:21:33 crc kubenswrapper[4833]: I1013 08:21:33.978623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-utilities" (OuterVolumeSpecName: "utilities") pod "2911ee94-e268-473a-aa38-fe565ee8c55f" (UID: "2911ee94-e268-473a-aa38-fe565ee8c55f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:21:33 crc kubenswrapper[4833]: I1013 08:21:33.983809 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2911ee94-e268-473a-aa38-fe565ee8c55f-kube-api-access-75nsc" (OuterVolumeSpecName: "kube-api-access-75nsc") pod "2911ee94-e268-473a-aa38-fe565ee8c55f" (UID: "2911ee94-e268-473a-aa38-fe565ee8c55f"). InnerVolumeSpecName "kube-api-access-75nsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.039071 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2911ee94-e268-473a-aa38-fe565ee8c55f" (UID: "2911ee94-e268-473a-aa38-fe565ee8c55f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.081467 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75nsc\" (UniqueName: \"kubernetes.io/projected/2911ee94-e268-473a-aa38-fe565ee8c55f-kube-api-access-75nsc\") on node \"crc\" DevicePath \"\"" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.081515 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.081530 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2911ee94-e268-473a-aa38-fe565ee8c55f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.335532 4833 generic.go:334] "Generic (PLEG): container finished" podID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerID="4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca" exitCode=0 Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.335650 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fr4zn" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.335694 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr4zn" event={"ID":"2911ee94-e268-473a-aa38-fe565ee8c55f","Type":"ContainerDied","Data":"4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca"} Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.336311 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr4zn" event={"ID":"2911ee94-e268-473a-aa38-fe565ee8c55f","Type":"ContainerDied","Data":"b166293c381ef66989175ab1bc90f84ba11ce449d77d8ea8bcdd1bd94e89d1eb"} Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.336348 4833 scope.go:117] "RemoveContainer" containerID="4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.384000 4833 scope.go:117] "RemoveContainer" containerID="9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.387694 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fr4zn"] Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.397198 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fr4zn"] Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.420416 4833 scope.go:117] "RemoveContainer" containerID="bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.454702 4833 scope.go:117] "RemoveContainer" containerID="4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca" Oct 13 08:21:34 crc kubenswrapper[4833]: E1013 08:21:34.455341 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca\": container with ID starting with 4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca not found: ID does not exist" containerID="4e9a5ebacc950115b5a5232dc441abd863040e37c2072bcf761464f5d91d4fca" Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.455407 
Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.455446 4833 scope.go:117] "RemoveContainer" containerID="9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42"
Oct 13 08:21:34 crc kubenswrapper[4833]: E1013 08:21:34.456212 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42\": container with ID starting with 9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42 not found: ID does not exist" containerID="9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42"
Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.456266 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42"} err="failed to get container status \"9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42\": rpc error: code = NotFound desc = could not find container \"9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42\": container with ID starting with 9f1f82c4d61903c68b20130ef6b6db4532994989208bcb8596f72250d4870f42 not found: ID does not exist"
Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.456296 4833 scope.go:117] "RemoveContainer" containerID="bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7"
Oct 13 08:21:34 crc kubenswrapper[4833]: E1013 08:21:34.456795 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7\": container with ID starting with bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7 not found: ID does not exist" containerID="bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7"
Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.456874 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7"} err="failed to get container status \"bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7\": rpc error: code = NotFound desc = could not find container \"bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7\": container with ID starting with bd4ce04b010127b900d1875acb216dbcff47fdb862cfb35d472e6b2ef90c77b7 not found: ID does not exist"
Oct 13 08:21:34 crc kubenswrapper[4833]: I1013 08:21:34.645041 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" path="/var/lib/kubelet/pods/2911ee94-e268-473a-aa38-fe565ee8c55f/volumes"
Oct 13 08:22:00 crc kubenswrapper[4833]: I1013 08:22:00.542483 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:22:00 crc kubenswrapper[4833]: I1013 08:22:00.543377 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.834976 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5b5t7"]
Oct 13 08:22:05 crc kubenswrapper[4833]: E1013 08:22:05.836831 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerName="registry-server"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.836863 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerName="registry-server"
Oct 13 08:22:05 crc kubenswrapper[4833]: E1013 08:22:05.836898 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerName="extract-content"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.836915 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerName="extract-content"
Oct 13 08:22:05 crc kubenswrapper[4833]: E1013 08:22:05.836975 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerName="extract-utilities"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.836994 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerName="extract-utilities"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.837618 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2911ee94-e268-473a-aa38-fe565ee8c55f" containerName="registry-server"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.841830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.853078 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5b5t7"]
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.990640 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-utilities\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.990871 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-catalog-content\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:05 crc kubenswrapper[4833]: I1013 08:22:05.990995 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrnw\" (UniqueName: \"kubernetes.io/projected/51c05204-9642-416a-b338-a5b41dfc1889-kube-api-access-pqrnw\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:06 crc kubenswrapper[4833]: I1013 08:22:06.093705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-utilities\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:06 crc kubenswrapper[4833]: I1013 08:22:06.093889 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-catalog-content\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:06 crc kubenswrapper[4833]: I1013 08:22:06.093987 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqrnw\" (UniqueName: \"kubernetes.io/projected/51c05204-9642-416a-b338-a5b41dfc1889-kube-api-access-pqrnw\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:06 crc kubenswrapper[4833]: I1013 08:22:06.094273 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-catalog-content\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:06 crc kubenswrapper[4833]: I1013 08:22:06.094845 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-utilities\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:06 crc kubenswrapper[4833]: I1013 08:22:06.115627 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqrnw\" (UniqueName: \"kubernetes.io/projected/51c05204-9642-416a-b338-a5b41dfc1889-kube-api-access-pqrnw\") pod \"redhat-operators-5b5t7\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") " pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:06 crc kubenswrapper[4833]: I1013 08:22:06.198523 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:07 crc kubenswrapper[4833]: I1013 08:22:07.465231 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5b5t7"]
Oct 13 08:22:07 crc kubenswrapper[4833]: I1013 08:22:07.783254 4833 generic.go:334] "Generic (PLEG): container finished" podID="51c05204-9642-416a-b338-a5b41dfc1889" containerID="5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c" exitCode=0
Oct 13 08:22:07 crc kubenswrapper[4833]: I1013 08:22:07.783376 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b5t7" event={"ID":"51c05204-9642-416a-b338-a5b41dfc1889","Type":"ContainerDied","Data":"5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c"}
Oct 13 08:22:07 crc kubenswrapper[4833]: I1013 08:22:07.783795 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b5t7" event={"ID":"51c05204-9642-416a-b338-a5b41dfc1889","Type":"ContainerStarted","Data":"82e51fe2956069e4643a8500796e37e5fda2b3856df022df4eef09e67986f842"}
Oct 13 08:22:09 crc kubenswrapper[4833]: I1013 08:22:09.805897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b5t7" event={"ID":"51c05204-9642-416a-b338-a5b41dfc1889","Type":"ContainerStarted","Data":"503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41"}
Oct 13 08:22:12 crc kubenswrapper[4833]: I1013 08:22:12.840715 4833 generic.go:334] "Generic (PLEG): container finished" podID="51c05204-9642-416a-b338-a5b41dfc1889" containerID="503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41" exitCode=0
Oct 13 08:22:12 crc kubenswrapper[4833]: I1013 08:22:12.841078 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b5t7" event={"ID":"51c05204-9642-416a-b338-a5b41dfc1889","Type":"ContainerDied","Data":"503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41"}
Oct 13 08:22:13 crc kubenswrapper[4833]: I1013 08:22:13.862911 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b5t7" event={"ID":"51c05204-9642-416a-b338-a5b41dfc1889","Type":"ContainerStarted","Data":"9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5"}
Oct 13 08:22:13 crc kubenswrapper[4833]: I1013 08:22:13.886242 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5b5t7" podStartSLOduration=3.357837718 podStartE2EDuration="8.886217983s" podCreationTimestamp="2025-10-13 08:22:05 +0000 UTC" firstStartedPulling="2025-10-13 08:22:07.785371893 +0000 UTC m=+6817.885794819" lastFinishedPulling="2025-10-13 08:22:13.313752138 +0000 UTC m=+6823.414175084" observedRunningTime="2025-10-13 08:22:13.878700469 +0000 UTC m=+6823.979123425" watchObservedRunningTime="2025-10-13 08:22:13.886217983 +0000 UTC m=+6823.986640909"
Oct 13 08:22:16 crc kubenswrapper[4833]: I1013 08:22:16.199179 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:16 crc kubenswrapper[4833]: I1013 08:22:16.199577 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:17 crc kubenswrapper[4833]: I1013 08:22:17.282426 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5b5t7" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="registry-server" probeResult="failure" output=<
Oct 13 08:22:17 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s
Oct 13 08:22:17 crc kubenswrapper[4833]: >
Oct 13 08:22:26 crc kubenswrapper[4833]: I1013 08:22:26.261847 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:26 crc kubenswrapper[4833]: I1013 08:22:26.327224 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:26 crc kubenswrapper[4833]: I1013 08:22:26.515530 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5b5t7"]
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.015045 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5b5t7" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="registry-server" containerID="cri-o://9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5" gracePeriod=2
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.461278 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.516304 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-catalog-content\") pod \"51c05204-9642-416a-b338-a5b41dfc1889\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") "
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.516441 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqrnw\" (UniqueName: \"kubernetes.io/projected/51c05204-9642-416a-b338-a5b41dfc1889-kube-api-access-pqrnw\") pod \"51c05204-9642-416a-b338-a5b41dfc1889\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") "
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.516644 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-utilities\") pod \"51c05204-9642-416a-b338-a5b41dfc1889\" (UID: \"51c05204-9642-416a-b338-a5b41dfc1889\") "
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.517324 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-utilities" (OuterVolumeSpecName: "utilities") pod "51c05204-9642-416a-b338-a5b41dfc1889" (UID: "51c05204-9642-416a-b338-a5b41dfc1889"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.517670 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.535400 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c05204-9642-416a-b338-a5b41dfc1889-kube-api-access-pqrnw" (OuterVolumeSpecName: "kube-api-access-pqrnw") pod "51c05204-9642-416a-b338-a5b41dfc1889" (UID: "51c05204-9642-416a-b338-a5b41dfc1889"). InnerVolumeSpecName "kube-api-access-pqrnw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.624986 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqrnw\" (UniqueName: \"kubernetes.io/projected/51c05204-9642-416a-b338-a5b41dfc1889-kube-api-access-pqrnw\") on node \"crc\" DevicePath \"\""
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.626550 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51c05204-9642-416a-b338-a5b41dfc1889" (UID: "51c05204-9642-416a-b338-a5b41dfc1889"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:22:28 crc kubenswrapper[4833]: I1013 08:22:28.728020 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c05204-9642-416a-b338-a5b41dfc1889-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.034611 4833 generic.go:334] "Generic (PLEG): container finished" podID="51c05204-9642-416a-b338-a5b41dfc1889" containerID="9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5" exitCode=0
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.034662 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b5t7" event={"ID":"51c05204-9642-416a-b338-a5b41dfc1889","Type":"ContainerDied","Data":"9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5"}
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.034706 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b5t7"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.034723 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b5t7" event={"ID":"51c05204-9642-416a-b338-a5b41dfc1889","Type":"ContainerDied","Data":"82e51fe2956069e4643a8500796e37e5fda2b3856df022df4eef09e67986f842"}
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.034749 4833 scope.go:117] "RemoveContainer" containerID="9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.067295 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5b5t7"]
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.074836 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5b5t7"]
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.076927 4833 scope.go:117] "RemoveContainer" containerID="503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.100811 4833 scope.go:117] "RemoveContainer" containerID="5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.163217 4833 scope.go:117] "RemoveContainer" containerID="9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5"
Oct 13 08:22:29 crc kubenswrapper[4833]: E1013 08:22:29.164155 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5\": container with ID starting with 9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5 not found: ID does not exist" containerID="9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.164212 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5"} err="failed to get container status \"9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5\": rpc error: code = NotFound desc = could not find container \"9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5\": container with ID starting with 9972ad9660405a0784f038de80883961fbdcdbf5447ec5bdcbd1ceb1f9dae2a5 not found: ID does not exist"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.164246 4833 scope.go:117] "RemoveContainer" containerID="503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41"
Oct 13 08:22:29 crc kubenswrapper[4833]: E1013 08:22:29.164804 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41\": container with ID starting with 503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41 not found: ID does not exist" containerID="503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.164868 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41"} err="failed to get container status \"503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41\": rpc error: code = NotFound desc = could not find container \"503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41\": container with ID starting with 503f5d4de65ecd8650b19b3d2a3918d86a79e925141ca2226978a9a27e6e6e41 not found: ID does not exist"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.164907 4833 scope.go:117] "RemoveContainer" containerID="5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c"
Oct 13 08:22:29 crc kubenswrapper[4833]: E1013 08:22:29.165222 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c\": container with ID starting with 5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c not found: ID does not exist" containerID="5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c"
Oct 13 08:22:29 crc kubenswrapper[4833]: I1013 08:22:29.165256 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c"} err="failed to get container status \"5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c\": rpc error: code = NotFound desc = could not find container \"5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c\": container with ID starting with 5790c9e0bad5791ef474c8b0ec9842b7c4ff26ce0a50654e4935c9766397013c not found: ID does not exist"
Oct 13 08:22:30 crc kubenswrapper[4833]: I1013 08:22:30.542453 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:22:30 crc kubenswrapper[4833]: I1013 08:22:30.542594 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:22:30 crc kubenswrapper[4833]: I1013 08:22:30.542680 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss"
Oct 13 08:22:30 crc kubenswrapper[4833]: I1013 08:22:30.544153 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6279157bea251d8a71a2d9a31b3efaebd21be95ceab991153799af739895c64"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 08:22:30 crc kubenswrapper[4833]: I1013 08:22:30.544292 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://a6279157bea251d8a71a2d9a31b3efaebd21be95ceab991153799af739895c64" gracePeriod=600
Oct 13 08:22:30 crc kubenswrapper[4833]: I1013 08:22:30.650757 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c05204-9642-416a-b338-a5b41dfc1889" path="/var/lib/kubelet/pods/51c05204-9642-416a-b338-a5b41dfc1889/volumes"
Oct 13 08:22:31 crc kubenswrapper[4833]: I1013 08:22:31.064660 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="a6279157bea251d8a71a2d9a31b3efaebd21be95ceab991153799af739895c64" exitCode=0
"Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="a6279157bea251d8a71a2d9a31b3efaebd21be95ceab991153799af739895c64" exitCode=0 Oct 13 08:22:31 crc kubenswrapper[4833]: I1013 08:22:31.064746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"a6279157bea251d8a71a2d9a31b3efaebd21be95ceab991153799af739895c64"} Oct 13 08:22:31 crc kubenswrapper[4833]: I1013 08:22:31.065157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd"} Oct 13 08:22:31 crc kubenswrapper[4833]: I1013 08:22:31.065194 4833 scope.go:117] "RemoveContainer" containerID="d4d4c44bb0e773c2232317552c7e13324e9005e08dbd3102a3b4703dc201b6e8" Oct 13 08:23:53 crc kubenswrapper[4833]: I1013 08:23:53.071615 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-whgqm"] Oct 13 08:23:53 crc kubenswrapper[4833]: I1013 08:23:53.084797 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-whgqm"] Oct 13 08:23:54 crc kubenswrapper[4833]: I1013 08:23:54.644294 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633e0793-d72b-4b22-88f3-8c58511dd268" path="/var/lib/kubelet/pods/633e0793-d72b-4b22-88f3-8c58511dd268/volumes" Oct 13 08:23:56 crc kubenswrapper[4833]: I1013 08:23:56.182274 4833 scope.go:117] "RemoveContainer" containerID="85463d271053011f8742463109b09f517270dc130bdf684dd000987e4d1dadf3" Oct 13 08:24:03 crc kubenswrapper[4833]: I1013 08:24:03.041635 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-b888-account-create-nhhh8"] Oct 13 08:24:03 crc kubenswrapper[4833]: I1013 08:24:03.053602 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-b888-account-create-nhhh8"] Oct 13 08:24:04 crc kubenswrapper[4833]: I1013 08:24:04.640221 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46f3081-52a3-46e0-9e1f-773be13994b9" path="/var/lib/kubelet/pods/a46f3081-52a3-46e0-9e1f-773be13994b9/volumes" Oct 13 08:24:19 crc kubenswrapper[4833]: I1013 08:24:19.050261 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-8q5lv"] Oct 13 08:24:19 crc kubenswrapper[4833]: I1013 08:24:19.061048 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-8q5lv"] Oct 13 08:24:20 crc kubenswrapper[4833]: I1013 08:24:20.648475 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01" path="/var/lib/kubelet/pods/9cb6aedb-ee3c-44ea-b1a3-9c6b63119d01/volumes" Oct 13 08:24:30 crc kubenswrapper[4833]: I1013 08:24:30.543084 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:24:30 crc kubenswrapper[4833]: I1013 08:24:30.543717 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:24:56 crc kubenswrapper[4833]: I1013 08:24:56.277622 4833 scope.go:117] "RemoveContainer" containerID="4ff117ea5cffce4b98b89a59680f1d178a628d40d23958ffcfda7a3db14ae8d7" Oct 13 08:24:56 crc kubenswrapper[4833]: I1013 08:24:56.320259 4833 scope.go:117] "RemoveContainer" containerID="d5a566a36d3105e7af84214e1d6eef7aea235d7c17e8aef56bc97bedac8d9914" Oct 13 08:25:00 crc kubenswrapper[4833]: I1013 08:25:00.543033 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:25:00 crc kubenswrapper[4833]: I1013 08:25:00.543585 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:25:30 crc kubenswrapper[4833]: I1013 08:25:30.543283 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:25:30 crc kubenswrapper[4833]: I1013 08:25:30.543953 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:25:30 crc kubenswrapper[4833]: I1013 08:25:30.544015 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 08:25:30 crc kubenswrapper[4833]: I1013 08:25:30.545123 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 08:25:30 crc kubenswrapper[4833]: I1013 08:25:30.545253 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" gracePeriod=600 Oct 13 08:25:30 crc kubenswrapper[4833]: E1013 08:25:30.677628 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:25:31 crc kubenswrapper[4833]: I1013 08:25:31.196323 4833 
generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" exitCode=0 Oct 13 08:25:31 crc kubenswrapper[4833]: I1013 08:25:31.196360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd"} Oct 13 08:25:31 crc kubenswrapper[4833]: I1013 08:25:31.196654 4833 scope.go:117] "RemoveContainer" containerID="a6279157bea251d8a71a2d9a31b3efaebd21be95ceab991153799af739895c64" Oct 13 08:25:31 crc kubenswrapper[4833]: I1013 08:25:31.197359 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:25:31 crc kubenswrapper[4833]: E1013 08:25:31.197672 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:25:42 crc kubenswrapper[4833]: I1013 08:25:42.628111 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:25:42 crc kubenswrapper[4833]: E1013 08:25:42.629120 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:25:54 crc kubenswrapper[4833]: I1013 08:25:54.628394 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:25:54 crc kubenswrapper[4833]: E1013 08:25:54.629723 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:26:09 crc kubenswrapper[4833]: I1013 08:26:09.628577 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:26:09 crc kubenswrapper[4833]: E1013 08:26:09.629735 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:26:24 crc kubenswrapper[4833]: I1013 08:26:24.628183 4833 scope.go:117] "RemoveContainer" 
containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:26:24 crc kubenswrapper[4833]: E1013 08:26:24.629497 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:26:38 crc kubenswrapper[4833]: I1013 08:26:38.627892 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:26:38 crc kubenswrapper[4833]: E1013 08:26:38.628821 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:26:42 crc kubenswrapper[4833]: I1013 08:26:42.062755 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-9rppj"] Oct 13 08:26:42 crc kubenswrapper[4833]: I1013 08:26:42.073083 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-9rppj"] Oct 13 08:26:42 crc kubenswrapper[4833]: I1013 08:26:42.644268 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1ff485-e9d3-42cb-a91b-893838097d21" path="/var/lib/kubelet/pods/2a1ff485-e9d3-42cb-a91b-893838097d21/volumes" Oct 13 08:26:51 crc kubenswrapper[4833]: I1013 08:26:51.057057 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-b02d-account-create-tddwc"] Oct 13 08:26:51 crc kubenswrapper[4833]: I1013 08:26:51.067100 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-b02d-account-create-tddwc"] Oct 13 08:26:52 crc kubenswrapper[4833]: I1013 08:26:52.641614 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ae373d-930f-4db9-8ff9-d8c40a13c48d" path="/var/lib/kubelet/pods/57ae373d-930f-4db9-8ff9-d8c40a13c48d/volumes" Oct 13 08:26:53 crc kubenswrapper[4833]: I1013 08:26:53.627231 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:26:53 crc kubenswrapper[4833]: E1013 08:26:53.627954 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:26:56 crc kubenswrapper[4833]: I1013 08:26:56.480949 4833 scope.go:117] "RemoveContainer" containerID="c43384bc85adb03d237d99ce71ca2eee5410ab9da244547a01067d7825605946" Oct 13 08:26:56 crc kubenswrapper[4833]: I1013 08:26:56.524047 4833 scope.go:117] "RemoveContainer" containerID="2e2b0e2c9d8279ddc62d2714ab3ae03887332b78204506b9bc87477a0bcfee47" Oct 13 08:27:03 crc kubenswrapper[4833]: I1013 08:27:03.030912 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/aodh-db-sync-9wqcp"] Oct 13 08:27:03 crc kubenswrapper[4833]: I1013 08:27:03.050726 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-9wqcp"] Oct 13 08:27:04 crc kubenswrapper[4833]: I1013 08:27:04.646851 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f" path="/var/lib/kubelet/pods/c6ddd027-f2bf-4e3d-a8d3-cdb924407e7f/volumes" Oct 13 08:27:06 crc kubenswrapper[4833]: I1013 08:27:06.627575 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:27:06 crc kubenswrapper[4833]: E1013 08:27:06.628122 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:27:17 crc kubenswrapper[4833]: I1013 08:27:17.627305 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:27:17 crc kubenswrapper[4833]: E1013 08:27:17.627961 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:27:31 crc kubenswrapper[4833]: I1013 08:27:31.627086 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:27:31 crc kubenswrapper[4833]: E1013 08:27:31.628011 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:27:44 crc kubenswrapper[4833]: I1013 08:27:44.627698 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:27:44 crc kubenswrapper[4833]: E1013 08:27:44.628517 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:27:56 crc kubenswrapper[4833]: I1013 08:27:56.695399 4833 scope.go:117] "RemoveContainer" containerID="0f0020cdf9c08c030f23412131070f6cf4484c1d2ee5de8b3eb616fd96135e37" Oct 13 08:27:57 crc kubenswrapper[4833]: I1013 08:27:57.628324 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:27:57 crc kubenswrapper[4833]: E1013 08:27:57.629005 
4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:28:08 crc kubenswrapper[4833]: I1013 08:28:08.627773 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:28:08 crc kubenswrapper[4833]: E1013 08:28:08.629956 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:28:22 crc kubenswrapper[4833]: I1013 08:28:22.627829 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:28:22 crc kubenswrapper[4833]: E1013 08:28:22.628495 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:28:34 crc kubenswrapper[4833]: I1013 08:28:34.631285 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:28:34 crc kubenswrapper[4833]: E1013 08:28:34.632322 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:28:48 crc kubenswrapper[4833]: I1013 08:28:48.627745 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:28:48 crc kubenswrapper[4833]: E1013 08:28:48.629032 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:29:01 crc kubenswrapper[4833]: I1013 08:29:01.628735 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:29:01 crc kubenswrapper[4833]: E1013 08:29:01.629761 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.626905 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nz7h9"] Oct 13 08:29:09 crc kubenswrapper[4833]: E1013 08:29:09.628155 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="registry-server" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.628172 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="registry-server" Oct 13 08:29:09 crc kubenswrapper[4833]: E1013 08:29:09.628190 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="extract-content" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.628197 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="extract-content" Oct 13 08:29:09 crc kubenswrapper[4833]: E1013 08:29:09.628205 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="extract-utilities" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.628214 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="extract-utilities" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.628501 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c05204-9642-416a-b338-a5b41dfc1889" containerName="registry-server" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.630689 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.649434 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz7h9"] Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.668048 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-utilities\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.668104 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-catalog-content\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.668637 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jqh\" (UniqueName: \"kubernetes.io/projected/991ea839-b955-4496-b166-02364fcdc871-kube-api-access-c4jqh\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.771188 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-utilities\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.771767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-catalog-content\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.771809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-utilities\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.772069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-catalog-content\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.772333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4jqh\" (UniqueName: \"kubernetes.io/projected/991ea839-b955-4496-b166-02364fcdc871-kube-api-access-c4jqh\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.796381 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c4jqh\" (UniqueName: \"kubernetes.io/projected/991ea839-b955-4496-b166-02364fcdc871-kube-api-access-c4jqh\") pod \"redhat-marketplace-nz7h9\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:09 crc kubenswrapper[4833]: I1013 08:29:09.950857 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:10 crc kubenswrapper[4833]: I1013 08:29:10.427521 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz7h9"] Oct 13 08:29:10 crc kubenswrapper[4833]: I1013 08:29:10.722558 4833 generic.go:334] "Generic (PLEG): container finished" podID="991ea839-b955-4496-b166-02364fcdc871" containerID="bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f" exitCode=0 Oct 13 08:29:10 crc kubenswrapper[4833]: I1013 08:29:10.722610 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz7h9" event={"ID":"991ea839-b955-4496-b166-02364fcdc871","Type":"ContainerDied","Data":"bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f"} Oct 13 08:29:10 crc kubenswrapper[4833]: I1013 08:29:10.722874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz7h9" event={"ID":"991ea839-b955-4496-b166-02364fcdc871","Type":"ContainerStarted","Data":"46372d7e0e229a77d02f354de8af211924cd4fa88c955a302b501be5e8d23401"} Oct 13 08:29:10 crc kubenswrapper[4833]: I1013 08:29:10.724758 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 08:29:11 crc kubenswrapper[4833]: I1013 08:29:11.734930 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz7h9" event={"ID":"991ea839-b955-4496-b166-02364fcdc871","Type":"ContainerStarted","Data":"90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0"} Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.204784 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9lh76"] Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.207711 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.218093 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lh76"] Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.223090 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-utilities\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.223311 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-catalog-content\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.223503 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmw6n\" (UniqueName: \"kubernetes.io/projected/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-kube-api-access-wmw6n\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.325704 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmw6n\" (UniqueName: \"kubernetes.io/projected/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-kube-api-access-wmw6n\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.326212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-utilities\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.326325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-catalog-content\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.326953 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-catalog-content\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.327529 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-utilities\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.349637 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wmw6n\" (UniqueName: \"kubernetes.io/projected/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-kube-api-access-wmw6n\") pod \"certified-operators-9lh76\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.528553 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.751009 4833 generic.go:334] "Generic (PLEG): container finished" podID="991ea839-b955-4496-b166-02364fcdc871" containerID="90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0" exitCode=0 Oct 13 08:29:12 crc kubenswrapper[4833]: I1013 08:29:12.751119 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz7h9" event={"ID":"991ea839-b955-4496-b166-02364fcdc871","Type":"ContainerDied","Data":"90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0"} Oct 13 08:29:13 crc kubenswrapper[4833]: I1013 08:29:13.117145 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lh76"] Oct 13 08:29:13 crc kubenswrapper[4833]: W1013 08:29:13.123341 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd448509_1fa4_4cea_842d_7bccdb4f6eb2.slice/crio-8dd8998a987098fbf85ca14b8f4dbb220b36bc81bfeefa834f67448bfe49d5c7 WatchSource:0}: Error finding container 8dd8998a987098fbf85ca14b8f4dbb220b36bc81bfeefa834f67448bfe49d5c7: Status 404 returned error can't find the container with id 8dd8998a987098fbf85ca14b8f4dbb220b36bc81bfeefa834f67448bfe49d5c7 Oct 13 08:29:13 crc kubenswrapper[4833]: I1013 08:29:13.761834 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerID="b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea" exitCode=0 Oct 13 08:29:13 crc kubenswrapper[4833]: I1013 08:29:13.762114 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lh76" event={"ID":"cd448509-1fa4-4cea-842d-7bccdb4f6eb2","Type":"ContainerDied","Data":"b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea"} Oct 13 08:29:13 crc kubenswrapper[4833]: I1013 08:29:13.762456 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lh76" event={"ID":"cd448509-1fa4-4cea-842d-7bccdb4f6eb2","Type":"ContainerStarted","Data":"8dd8998a987098fbf85ca14b8f4dbb220b36bc81bfeefa834f67448bfe49d5c7"} Oct 13 08:29:13 crc kubenswrapper[4833]: I1013 08:29:13.767967 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz7h9" event={"ID":"991ea839-b955-4496-b166-02364fcdc871","Type":"ContainerStarted","Data":"cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3"} Oct 13 08:29:13 crc kubenswrapper[4833]: I1013 08:29:13.813875 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nz7h9" podStartSLOduration=2.252884066 podStartE2EDuration="4.813853501s" podCreationTimestamp="2025-10-13 08:29:09 +0000 UTC" firstStartedPulling="2025-10-13 08:29:10.724439783 +0000 UTC m=+7240.824862719" lastFinishedPulling="2025-10-13 08:29:13.285409218 +0000 UTC m=+7243.385832154" observedRunningTime="2025-10-13 08:29:13.803832016 +0000 UTC 
m=+7243.904254932" watchObservedRunningTime="2025-10-13 08:29:13.813853501 +0000 UTC m=+7243.914276417" Oct 13 08:29:14 crc kubenswrapper[4833]: I1013 08:29:14.628809 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:29:14 crc kubenswrapper[4833]: E1013 08:29:14.630020 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:29:14 crc kubenswrapper[4833]: I1013 08:29:14.787323 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lh76" event={"ID":"cd448509-1fa4-4cea-842d-7bccdb4f6eb2","Type":"ContainerStarted","Data":"88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924"} Oct 13 08:29:16 crc kubenswrapper[4833]: I1013 08:29:16.811645 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerID="88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924" exitCode=0 Oct 13 08:29:16 crc kubenswrapper[4833]: I1013 08:29:16.811732 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lh76" event={"ID":"cd448509-1fa4-4cea-842d-7bccdb4f6eb2","Type":"ContainerDied","Data":"88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924"} Oct 13 08:29:17 crc kubenswrapper[4833]: I1013 08:29:17.824255 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lh76" event={"ID":"cd448509-1fa4-4cea-842d-7bccdb4f6eb2","Type":"ContainerStarted","Data":"0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819"} Oct 13 08:29:17 crc kubenswrapper[4833]: I1013 08:29:17.854964 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9lh76" podStartSLOduration=2.308230435 podStartE2EDuration="5.854943854s" podCreationTimestamp="2025-10-13 08:29:12 +0000 UTC" firstStartedPulling="2025-10-13 08:29:13.764512878 +0000 UTC m=+7243.864935794" lastFinishedPulling="2025-10-13 08:29:17.311226287 +0000 UTC m=+7247.411649213" observedRunningTime="2025-10-13 08:29:17.84705704 +0000 UTC m=+7247.947479956" watchObservedRunningTime="2025-10-13 08:29:17.854943854 +0000 UTC m=+7247.955366770" Oct 13 08:29:19 crc kubenswrapper[4833]: I1013 08:29:19.951844 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:19 crc kubenswrapper[4833]: I1013 08:29:19.952474 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:20 crc kubenswrapper[4833]: I1013 08:29:20.020049 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:20 crc kubenswrapper[4833]: I1013 08:29:20.918301 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:21 crc kubenswrapper[4833]: I1013 08:29:21.203098 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nz7h9"] Oct 13 08:29:22 crc kubenswrapper[4833]: I1013 08:29:22.529919 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:22 crc kubenswrapper[4833]: I1013 08:29:22.529969 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:22 crc kubenswrapper[4833]: I1013 08:29:22.608815 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:22 crc kubenswrapper[4833]: I1013 08:29:22.885513 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nz7h9" podUID="991ea839-b955-4496-b166-02364fcdc871" containerName="registry-server" containerID="cri-o://cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3" gracePeriod=2 Oct 13 08:29:22 crc kubenswrapper[4833]: I1013 08:29:22.967313 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.435786 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.592524 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4jqh\" (UniqueName: \"kubernetes.io/projected/991ea839-b955-4496-b166-02364fcdc871-kube-api-access-c4jqh\") pod \"991ea839-b955-4496-b166-02364fcdc871\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.592711 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-utilities\") pod \"991ea839-b955-4496-b166-02364fcdc871\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.592770 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-catalog-content\") pod \"991ea839-b955-4496-b166-02364fcdc871\" (UID: \"991ea839-b955-4496-b166-02364fcdc871\") " Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.594416 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9lh76"] Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.594407 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-utilities" (OuterVolumeSpecName: "utilities") pod "991ea839-b955-4496-b166-02364fcdc871" (UID: "991ea839-b955-4496-b166-02364fcdc871"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.600587 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991ea839-b955-4496-b166-02364fcdc871-kube-api-access-c4jqh" (OuterVolumeSpecName: "kube-api-access-c4jqh") pod "991ea839-b955-4496-b166-02364fcdc871" (UID: "991ea839-b955-4496-b166-02364fcdc871"). InnerVolumeSpecName "kube-api-access-c4jqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.608158 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "991ea839-b955-4496-b166-02364fcdc871" (UID: "991ea839-b955-4496-b166-02364fcdc871"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.694457 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.694494 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991ea839-b955-4496-b166-02364fcdc871-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.694507 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4jqh\" (UniqueName: \"kubernetes.io/projected/991ea839-b955-4496-b166-02364fcdc871-kube-api-access-c4jqh\") on node \"crc\" DevicePath \"\"" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.896351 4833 generic.go:334] "Generic (PLEG): container finished" podID="991ea839-b955-4496-b166-02364fcdc871" containerID="cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3" exitCode=0 Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.896629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz7h9" event={"ID":"991ea839-b955-4496-b166-02364fcdc871","Type":"ContainerDied","Data":"cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3"} Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.897155 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz7h9" event={"ID":"991ea839-b955-4496-b166-02364fcdc871","Type":"ContainerDied","Data":"46372d7e0e229a77d02f354de8af211924cd4fa88c955a302b501be5e8d23401"} Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.897248 4833 scope.go:117] "RemoveContainer" containerID="cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.896725 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz7h9" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.923392 4833 scope.go:117] "RemoveContainer" containerID="90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.955038 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz7h9"] Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.961293 4833 scope.go:117] "RemoveContainer" containerID="bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f" Oct 13 08:29:23 crc kubenswrapper[4833]: I1013 08:29:23.967588 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz7h9"] Oct 13 08:29:24 crc kubenswrapper[4833]: I1013 08:29:24.010867 4833 scope.go:117] "RemoveContainer" containerID="cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3" Oct 13 08:29:24 crc kubenswrapper[4833]: E1013 08:29:24.011354 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3\": container with ID starting with cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3 not found: ID does not exist" containerID="cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3" Oct 13 08:29:24 crc kubenswrapper[4833]: I1013 08:29:24.011423 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3"} err="failed to get container status \"cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3\": rpc error: code = NotFound desc = could not find container \"cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3\": container with ID starting with cf919bb18e6c4dab47b38a47873fac7d529cd2ba77d78a9bf32a45049cc5fca3 not found: ID does not exist" Oct 13 08:29:24 crc kubenswrapper[4833]: I1013 08:29:24.011464 4833 scope.go:117] "RemoveContainer" containerID="90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0" Oct 13 08:29:24 crc kubenswrapper[4833]: E1013 08:29:24.012146 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0\": container with ID starting with 90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0 not found: ID does not exist" containerID="90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0" Oct 13 08:29:24 crc kubenswrapper[4833]: I1013 08:29:24.012194 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0"} err="failed to get container status \"90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0\": rpc error: code = NotFound desc = could not find container \"90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0\": container with ID starting with 90b1453cba443c47b87934344a560147e0bc74e31051ba42d0923fb638f132f0 not found: ID does not exist" Oct 13 08:29:24 crc kubenswrapper[4833]: I1013 08:29:24.012228 4833 scope.go:117] "RemoveContainer" containerID="bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f" Oct 13 08:29:24 crc kubenswrapper[4833]: E1013 08:29:24.012707 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f\": container with ID starting with bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f not found: ID does not exist" containerID="bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f" Oct 13 08:29:24 crc kubenswrapper[4833]: I1013 08:29:24.012779 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f"} err="failed to get container status \"bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f\": rpc error: code = NotFound desc = could not find container \"bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f\": container with ID starting with bf599d90e9fe6a1fddc23127cb25417db651e7e33a8a7a14cd7409c64375fb6f not found: ID does not exist" Oct 13 08:29:24 crc kubenswrapper[4833]: I1013 08:29:24.648056 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991ea839-b955-4496-b166-02364fcdc871" path="/var/lib/kubelet/pods/991ea839-b955-4496-b166-02364fcdc871/volumes" Oct 13 08:29:24 crc kubenswrapper[4833]: I1013 08:29:24.908795 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9lh76" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerName="registry-server" containerID="cri-o://0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819" gracePeriod=2 Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.478273 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.652785 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmw6n\" (UniqueName: \"kubernetes.io/projected/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-kube-api-access-wmw6n\") pod \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.654232 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-utilities\") pod \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.654576 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-catalog-content\") pod \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\" (UID: \"cd448509-1fa4-4cea-842d-7bccdb4f6eb2\") " Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.655417 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-utilities" (OuterVolumeSpecName: "utilities") pod "cd448509-1fa4-4cea-842d-7bccdb4f6eb2" (UID: "cd448509-1fa4-4cea-842d-7bccdb4f6eb2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.656504 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.659154 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-kube-api-access-wmw6n" (OuterVolumeSpecName: "kube-api-access-wmw6n") pod "cd448509-1fa4-4cea-842d-7bccdb4f6eb2" (UID: "cd448509-1fa4-4cea-842d-7bccdb4f6eb2"). InnerVolumeSpecName "kube-api-access-wmw6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.707125 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd448509-1fa4-4cea-842d-7bccdb4f6eb2" (UID: "cd448509-1fa4-4cea-842d-7bccdb4f6eb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.758148 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmw6n\" (UniqueName: \"kubernetes.io/projected/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-kube-api-access-wmw6n\") on node \"crc\" DevicePath \"\"" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.758178 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd448509-1fa4-4cea-842d-7bccdb4f6eb2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.934691 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerID="0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819" exitCode=0 Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.934742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lh76" event={"ID":"cd448509-1fa4-4cea-842d-7bccdb4f6eb2","Type":"ContainerDied","Data":"0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819"} Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.934780 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lh76" event={"ID":"cd448509-1fa4-4cea-842d-7bccdb4f6eb2","Type":"ContainerDied","Data":"8dd8998a987098fbf85ca14b8f4dbb220b36bc81bfeefa834f67448bfe49d5c7"} Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.934798 4833 scope.go:117] "RemoveContainer" containerID="0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.934909 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9lh76" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.969137 4833 scope.go:117] "RemoveContainer" containerID="88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924" Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.971744 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9lh76"] Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.982852 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9lh76"] Oct 13 08:29:25 crc kubenswrapper[4833]: I1013 08:29:25.991200 4833 scope.go:117] "RemoveContainer" containerID="b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea" Oct 13 08:29:26 crc kubenswrapper[4833]: I1013 08:29:26.039829 4833 scope.go:117] "RemoveContainer" containerID="0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819" Oct 13 08:29:26 crc kubenswrapper[4833]: E1013 08:29:26.040276 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819\": container with ID starting with 0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819 not found: ID does not exist" containerID="0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819" Oct 13 08:29:26 crc kubenswrapper[4833]: I1013 08:29:26.040305 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819"} err="failed to get container status \"0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819\": rpc error: code = NotFound desc = could not find container \"0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819\": container with ID starting with 0b9eb02a04bf52718e991127f3308f679dd91c5afc478abec4e5312c6e7b6819 not found: ID does not exist" Oct 13 08:29:26 crc kubenswrapper[4833]: I1013 08:29:26.040342 4833 scope.go:117] "RemoveContainer" containerID="88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924" Oct 13 08:29:26 crc kubenswrapper[4833]: E1013 08:29:26.040752 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924\": container with ID starting with 88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924 not found: ID does not exist" containerID="88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924" Oct 13 08:29:26 crc kubenswrapper[4833]: I1013 08:29:26.040789 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924"} err="failed to get container status \"88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924\": rpc error: code = NotFound desc = could not find container \"88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924\": container with ID starting with 88577b5e5ef591a8059dd51bea9fb576e9a2e84dc2f79c400287999bb20e5924 not found: ID does not exist" Oct 13 08:29:26 crc kubenswrapper[4833]: I1013 08:29:26.040823 4833 scope.go:117] "RemoveContainer" containerID="b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea" Oct 13 08:29:26 crc kubenswrapper[4833]: E1013 08:29:26.041239 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea\": container with ID starting with b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea not found: ID does not exist" containerID="b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea" Oct 13 08:29:26 crc kubenswrapper[4833]: I1013 08:29:26.041261 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea"} err="failed to get container status \"b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea\": rpc error: code = NotFound desc = could not find container \"b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea\": container with ID starting with b54692d11a403f649467be0ffb0f55576d7000b10940cff7aa59b85e76d9a2ea not found: ID does not exist" Oct 13 08:29:26 crc kubenswrapper[4833]: I1013 08:29:26.667419 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" path="/var/lib/kubelet/pods/cd448509-1fa4-4cea-842d-7bccdb4f6eb2/volumes" Oct 13 08:29:27 crc kubenswrapper[4833]: I1013 08:29:27.627519 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:29:27 crc kubenswrapper[4833]: E1013 08:29:27.627982 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:29:42 crc kubenswrapper[4833]: I1013 08:29:42.627970 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:29:42 crc kubenswrapper[4833]: E1013 08:29:42.628756 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:29:56 crc kubenswrapper[4833]: I1013 08:29:56.628641 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd" Oct 13 08:29:56 crc kubenswrapper[4833]: E1013 08:29:56.629646 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.174062 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"] Oct 13 08:30:00 crc kubenswrapper[4833]: E1013 08:30:00.175054 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="991ea839-b955-4496-b166-02364fcdc871" containerName="registry-server" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.175068 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="991ea839-b955-4496-b166-02364fcdc871" containerName="registry-server" Oct 13 08:30:00 crc kubenswrapper[4833]: E1013 08:30:00.175079 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerName="registry-server" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.175085 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerName="registry-server" Oct 13 08:30:00 crc kubenswrapper[4833]: E1013 08:30:00.175129 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991ea839-b955-4496-b166-02364fcdc871" containerName="extract-utilities" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.175139 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="991ea839-b955-4496-b166-02364fcdc871" containerName="extract-utilities" Oct 13 08:30:00 crc kubenswrapper[4833]: E1013 08:30:00.175154 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerName="extract-content" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.175159 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerName="extract-content" Oct 13 08:30:00 crc kubenswrapper[4833]: E1013 08:30:00.175170 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991ea839-b955-4496-b166-02364fcdc871" containerName="extract-content" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.175176 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="991ea839-b955-4496-b166-02364fcdc871" containerName="extract-content" Oct 13 08:30:00 crc kubenswrapper[4833]: E1013 08:30:00.175184 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerName="extract-utilities" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.175190 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerName="extract-utilities" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.175375 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd448509-1fa4-4cea-842d-7bccdb4f6eb2" containerName="registry-server" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.175390 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="991ea839-b955-4496-b166-02364fcdc871" containerName="registry-server" Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.176299 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.178951 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.181179 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.188364 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"]
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.219278 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-secret-volume\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.219440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-config-volume\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.219510 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsskb\" (UniqueName: \"kubernetes.io/projected/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-kube-api-access-hsskb\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.321510 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-secret-volume\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.321611 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-config-volume\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.321634 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsskb\" (UniqueName: \"kubernetes.io/projected/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-kube-api-access-hsskb\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.322723 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-config-volume\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.327256 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-secret-volume\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.337321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsskb\" (UniqueName: \"kubernetes.io/projected/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-kube-api-access-hsskb\") pod \"collect-profiles-29339070-wsc64\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.515861 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:00 crc kubenswrapper[4833]: I1013 08:30:00.992460 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"]
Oct 13 08:30:01 crc kubenswrapper[4833]: I1013 08:30:01.327684 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64" event={"ID":"3a150200-a8a1-42f8-add6-8b78e4b6eb6c","Type":"ContainerStarted","Data":"c9b9b6898475b5c85ca684d49ab9211072253ba774e669f1a483efeb2b08c9c8"}
Oct 13 08:30:01 crc kubenswrapper[4833]: I1013 08:30:01.327903 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64" event={"ID":"3a150200-a8a1-42f8-add6-8b78e4b6eb6c","Type":"ContainerStarted","Data":"06172f15ccd398520ed25e5238e7d23747c8e7747846e1c1b1bba724acd193f0"}
Oct 13 08:30:01 crc kubenswrapper[4833]: I1013 08:30:01.365825 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64" podStartSLOduration=1.365782726 podStartE2EDuration="1.365782726s" podCreationTimestamp="2025-10-13 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 08:30:01.341529926 +0000 UTC m=+7291.441952872" watchObservedRunningTime="2025-10-13 08:30:01.365782726 +0000 UTC m=+7291.466205652"
Oct 13 08:30:02 crc kubenswrapper[4833]: I1013 08:30:02.342725 4833 generic.go:334] "Generic (PLEG): container finished" podID="3a150200-a8a1-42f8-add6-8b78e4b6eb6c" containerID="c9b9b6898475b5c85ca684d49ab9211072253ba774e669f1a483efeb2b08c9c8" exitCode=0
Oct 13 08:30:02 crc kubenswrapper[4833]: I1013 08:30:02.342801 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64" event={"ID":"3a150200-a8a1-42f8-add6-8b78e4b6eb6c","Type":"ContainerDied","Data":"c9b9b6898475b5c85ca684d49ab9211072253ba774e669f1a483efeb2b08c9c8"}
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.744874 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.811999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-secret-volume\") pod \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") "
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.812058 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-config-volume\") pod \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") "
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.812260 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsskb\" (UniqueName: \"kubernetes.io/projected/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-kube-api-access-hsskb\") pod \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\" (UID: \"3a150200-a8a1-42f8-add6-8b78e4b6eb6c\") "
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.813017 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a150200-a8a1-42f8-add6-8b78e4b6eb6c" (UID: "3a150200-a8a1-42f8-add6-8b78e4b6eb6c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.817407 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a150200-a8a1-42f8-add6-8b78e4b6eb6c" (UID: "3a150200-a8a1-42f8-add6-8b78e4b6eb6c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.817757 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-kube-api-access-hsskb" (OuterVolumeSpecName: "kube-api-access-hsskb") pod "3a150200-a8a1-42f8-add6-8b78e4b6eb6c" (UID: "3a150200-a8a1-42f8-add6-8b78e4b6eb6c"). InnerVolumeSpecName "kube-api-access-hsskb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.914755 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsskb\" (UniqueName: \"kubernetes.io/projected/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-kube-api-access-hsskb\") on node \"crc\" DevicePath \"\""
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.914790 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 13 08:30:03 crc kubenswrapper[4833]: I1013 08:30:03.914801 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a150200-a8a1-42f8-add6-8b78e4b6eb6c-config-volume\") on node \"crc\" DevicePath \"\""
Oct 13 08:30:04 crc kubenswrapper[4833]: I1013 08:30:04.403401 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64" event={"ID":"3a150200-a8a1-42f8-add6-8b78e4b6eb6c","Type":"ContainerDied","Data":"06172f15ccd398520ed25e5238e7d23747c8e7747846e1c1b1bba724acd193f0"}
Oct 13 08:30:04 crc kubenswrapper[4833]: I1013 08:30:04.403447 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06172f15ccd398520ed25e5238e7d23747c8e7747846e1c1b1bba724acd193f0"
Oct 13 08:30:04 crc kubenswrapper[4833]: I1013 08:30:04.403517 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"
Oct 13 08:30:04 crc kubenswrapper[4833]: I1013 08:30:04.460362 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4"]
Oct 13 08:30:04 crc kubenswrapper[4833]: I1013 08:30:04.474304 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339025-lnjs4"]
Oct 13 08:30:04 crc kubenswrapper[4833]: I1013 08:30:04.639961 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781ba963-7821-4dfd-9baa-8a66f21e58c0" path="/var/lib/kubelet/pods/781ba963-7821-4dfd-9baa-8a66f21e58c0/volumes"
Oct 13 08:30:11 crc kubenswrapper[4833]: I1013 08:30:11.627691 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd"
Oct 13 08:30:11 crc kubenswrapper[4833]: E1013 08:30:11.628649 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:30:26 crc kubenswrapper[4833]: I1013 08:30:26.628833 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd"
Oct 13 08:30:26 crc kubenswrapper[4833]: E1013 08:30:26.630191 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 08:30:40 crc kubenswrapper[4833]: I1013 08:30:40.643845 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd"
Oct 13 08:30:41 crc kubenswrapper[4833]: I1013 08:30:41.845840 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"a61878d9bc65041e67d954d25961f084c1ef7801850f460f69cbb89c62e52cb1"}
Oct 13 08:30:56 crc kubenswrapper[4833]: I1013 08:30:56.855580 4833 scope.go:117] "RemoveContainer" containerID="56e448c4790e7a12a463971e3e51c26270f114ad6970436ad09b0932cb9e3191"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.764148 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xm68h"]
Oct 13 08:31:31 crc kubenswrapper[4833]: E1013 08:31:31.766140 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a150200-a8a1-42f8-add6-8b78e4b6eb6c" containerName="collect-profiles"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.766174 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a150200-a8a1-42f8-add6-8b78e4b6eb6c" containerName="collect-profiles"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.767073 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a150200-a8a1-42f8-add6-8b78e4b6eb6c" containerName="collect-profiles"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.771865 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xm68h"]
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.772128 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.895290 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-utilities\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.895365 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-catalog-content\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.895408 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j92g\" (UniqueName: \"kubernetes.io/projected/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-kube-api-access-4j92g\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.997982 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-catalog-content\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.998097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j92g\" (UniqueName: \"kubernetes.io/projected/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-kube-api-access-4j92g\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.998366 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-utilities\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.998558 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-catalog-content\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:31 crc kubenswrapper[4833]: I1013 08:31:31.999047 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-utilities\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:32 crc kubenswrapper[4833]: I1013 08:31:32.016478 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j92g\" (UniqueName: \"kubernetes.io/projected/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-kube-api-access-4j92g\") pod \"community-operators-xm68h\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") " pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:32 crc kubenswrapper[4833]: I1013 08:31:32.107754 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:32 crc kubenswrapper[4833]: I1013 08:31:32.685628 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xm68h"]
Oct 13 08:31:33 crc kubenswrapper[4833]: I1013 08:31:33.435801 4833 generic.go:334] "Generic (PLEG): container finished" podID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerID="90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794" exitCode=0
Oct 13 08:31:33 crc kubenswrapper[4833]: I1013 08:31:33.435903 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm68h" event={"ID":"4d2a42b3-1c7e-4e6f-9299-06162b0f1048","Type":"ContainerDied","Data":"90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794"}
Oct 13 08:31:33 crc kubenswrapper[4833]: I1013 08:31:33.436583 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm68h" event={"ID":"4d2a42b3-1c7e-4e6f-9299-06162b0f1048","Type":"ContainerStarted","Data":"929485432ce8d2708f70a9ff1b66c9bb711e4a59a6fb3063f0165b452652f5dc"}
Oct 13 08:31:34 crc kubenswrapper[4833]: I1013 08:31:34.449366 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm68h" event={"ID":"4d2a42b3-1c7e-4e6f-9299-06162b0f1048","Type":"ContainerStarted","Data":"d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace"}
Oct 13 08:31:36 crc kubenswrapper[4833]: I1013 08:31:36.481745 4833 generic.go:334] "Generic (PLEG): container finished" podID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerID="d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace" exitCode=0
Oct 13 08:31:36 crc kubenswrapper[4833]: I1013 08:31:36.481798 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm68h" event={"ID":"4d2a42b3-1c7e-4e6f-9299-06162b0f1048","Type":"ContainerDied","Data":"d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace"}
Oct 13 08:31:37 crc kubenswrapper[4833]: I1013 08:31:37.491459 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm68h" event={"ID":"4d2a42b3-1c7e-4e6f-9299-06162b0f1048","Type":"ContainerStarted","Data":"59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b"}
Oct 13 08:31:37 crc kubenswrapper[4833]: I1013 08:31:37.517178 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xm68h" podStartSLOduration=3.072133571 podStartE2EDuration="6.517155049s" podCreationTimestamp="2025-10-13 08:31:31 +0000 UTC" firstStartedPulling="2025-10-13 08:31:33.43940548 +0000 UTC m=+7383.539828436" lastFinishedPulling="2025-10-13 08:31:36.884426958 +0000 UTC m=+7386.984849914" observedRunningTime="2025-10-13 08:31:37.50773364 +0000 UTC m=+7387.608156586" watchObservedRunningTime="2025-10-13 08:31:37.517155049 +0000 UTC m=+7387.617577975"
Oct 13 08:31:42 crc kubenswrapper[4833]: I1013 08:31:42.108291 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:42 crc kubenswrapper[4833]: I1013 08:31:42.109196 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:42 crc kubenswrapper[4833]: I1013 08:31:42.199911 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:42 crc kubenswrapper[4833]: I1013 08:31:42.617807 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:42 crc kubenswrapper[4833]: I1013 08:31:42.687046 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xm68h"]
Oct 13 08:31:44 crc kubenswrapper[4833]: I1013 08:31:44.574875 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xm68h" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerName="registry-server" containerID="cri-o://59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b" gracePeriod=2
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.108495 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.232390 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-catalog-content\") pod \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") "
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.232525 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-utilities\") pod \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") "
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.232585 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j92g\" (UniqueName: \"kubernetes.io/projected/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-kube-api-access-4j92g\") pod \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\" (UID: \"4d2a42b3-1c7e-4e6f-9299-06162b0f1048\") "
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.233270 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-utilities" (OuterVolumeSpecName: "utilities") pod "4d2a42b3-1c7e-4e6f-9299-06162b0f1048" (UID: "4d2a42b3-1c7e-4e6f-9299-06162b0f1048"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.239211 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-kube-api-access-4j92g" (OuterVolumeSpecName: "kube-api-access-4j92g") pod "4d2a42b3-1c7e-4e6f-9299-06162b0f1048" (UID: "4d2a42b3-1c7e-4e6f-9299-06162b0f1048"). InnerVolumeSpecName "kube-api-access-4j92g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.304194 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d2a42b3-1c7e-4e6f-9299-06162b0f1048" (UID: "4d2a42b3-1c7e-4e6f-9299-06162b0f1048"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.335752 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.335814 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.335847 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j92g\" (UniqueName: \"kubernetes.io/projected/4d2a42b3-1c7e-4e6f-9299-06162b0f1048-kube-api-access-4j92g\") on node \"crc\" DevicePath \"\""
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.599820 4833 generic.go:334] "Generic (PLEG): container finished" podID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerID="59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b" exitCode=0
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.599871 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm68h" event={"ID":"4d2a42b3-1c7e-4e6f-9299-06162b0f1048","Type":"ContainerDied","Data":"59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b"}
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.599909 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm68h" event={"ID":"4d2a42b3-1c7e-4e6f-9299-06162b0f1048","Type":"ContainerDied","Data":"929485432ce8d2708f70a9ff1b66c9bb711e4a59a6fb3063f0165b452652f5dc"}
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.599932 4833 scope.go:117] "RemoveContainer" containerID="59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.599974 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm68h"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.638458 4833 scope.go:117] "RemoveContainer" containerID="d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.653453 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xm68h"]
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.660727 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xm68h"]
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.693096 4833 scope.go:117] "RemoveContainer" containerID="90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.746102 4833 scope.go:117] "RemoveContainer" containerID="59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b"
Oct 13 08:31:45 crc kubenswrapper[4833]: E1013 08:31:45.746666 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b\": container with ID starting with 59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b not found: ID does not exist" containerID="59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.746721 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b"} err="failed to get container status \"59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b\": rpc error: code = NotFound desc = could not find container \"59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b\": container with ID starting with 59c4a0505690d1de27d032db9782ffc9e69b5c5dd01826964662915ca2e5515b not found: ID does not exist"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.746787 4833 scope.go:117] "RemoveContainer" containerID="d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace"
Oct 13 08:31:45 crc kubenswrapper[4833]: E1013 08:31:45.747232 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace\": container with ID starting with d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace not found: ID does not exist" containerID="d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.747266 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace"} err="failed to get container status \"d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace\": rpc error: code = NotFound desc = could not find container \"d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace\": container with ID starting with d0a51256db3836b3d40772008f6998ec595fd112fbd82a6b47890830c3735ace not found: ID does not exist"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.747286 4833 scope.go:117] "RemoveContainer" containerID="90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794"
Oct 13 08:31:45 crc kubenswrapper[4833]: E1013 08:31:45.749797 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794\": container with ID starting with 90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794 not found: ID does not exist" containerID="90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794"
Oct 13 08:31:45 crc kubenswrapper[4833]: I1013 08:31:45.749846 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794"} err="failed to get container status \"90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794\": rpc error: code = NotFound desc = could not find container \"90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794\": container with ID starting with 90ace387cfbf649074952ffd126603e0acd1c752687e26a99eedcea17bf0a794 not found: ID does not exist"
Oct 13 08:31:46 crc kubenswrapper[4833]: I1013 08:31:46.614703 4833 generic.go:334] "Generic (PLEG): container finished" podID="8e4d3fe9-fde6-4388-892d-6477fa1aa0c4" containerID="697329c65ff5771003929922d5ec04e0f9d6f3031ca811dd25281009fd49a702" exitCode=0
Oct 13 08:31:46 crc kubenswrapper[4833]: I1013 08:31:46.614747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" event={"ID":"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4","Type":"ContainerDied","Data":"697329c65ff5771003929922d5ec04e0f9d6f3031ca811dd25281009fd49a702"}
Oct 13 08:31:46 crc kubenswrapper[4833]: I1013 08:31:46.653097 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" path="/var/lib/kubelet/pods/4d2a42b3-1c7e-4e6f-9299-06162b0f1048/volumes"
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.111715 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n"
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.207668 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-inventory\") pod \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") "
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.207964 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdpq4\" (UniqueName: \"kubernetes.io/projected/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-kube-api-access-xdpq4\") pod \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") "
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.209824 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-ssh-key\") pod \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") "
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.210071 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-tripleo-cleanup-combined-ca-bundle\") pod \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\" (UID: \"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4\") "
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.217036 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-kube-api-access-xdpq4" (OuterVolumeSpecName: "kube-api-access-xdpq4") pod "8e4d3fe9-fde6-4388-892d-6477fa1aa0c4" (UID: "8e4d3fe9-fde6-4388-892d-6477fa1aa0c4"). InnerVolumeSpecName "kube-api-access-xdpq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.227596 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "8e4d3fe9-fde6-4388-892d-6477fa1aa0c4" (UID: "8e4d3fe9-fde6-4388-892d-6477fa1aa0c4"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.252274 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-inventory" (OuterVolumeSpecName: "inventory") pod "8e4d3fe9-fde6-4388-892d-6477fa1aa0c4" (UID: "8e4d3fe9-fde6-4388-892d-6477fa1aa0c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.265873 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8e4d3fe9-fde6-4388-892d-6477fa1aa0c4" (UID: "8e4d3fe9-fde6-4388-892d-6477fa1aa0c4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.313365 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdpq4\" (UniqueName: \"kubernetes.io/projected/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-kube-api-access-xdpq4\") on node \"crc\" DevicePath \"\""
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.313644 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.313658 4833 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.313671 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e4d3fe9-fde6-4388-892d-6477fa1aa0c4-inventory\") on node \"crc\" DevicePath \"\""
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.642351 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n"
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.650302 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n" event={"ID":"8e4d3fe9-fde6-4388-892d-6477fa1aa0c4","Type":"ContainerDied","Data":"632ff0e6f345a3da0b02f20612c311210e179ab424af866566e91be0d0136121"}
Oct 13 08:31:48 crc kubenswrapper[4833]: I1013 08:31:48.650383 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="632ff0e6f345a3da0b02f20612c311210e179ab424af866566e91be0d0136121"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.135949 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-rlqwd"]
Oct 13 08:31:56 crc kubenswrapper[4833]: E1013 08:31:56.136972 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerName="extract-content"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.136988 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerName="extract-content"
Oct 13 08:31:56 crc kubenswrapper[4833]: E1013 08:31:56.137010 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4d3fe9-fde6-4388-892d-6477fa1aa0c4" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.137019 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4d3fe9-fde6-4388-892d-6477fa1aa0c4" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Oct 13 08:31:56 crc kubenswrapper[4833]: E1013 08:31:56.137034 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerName="extract-utilities"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.137043 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerName="extract-utilities"
Oct 13 08:31:56 crc kubenswrapper[4833]: E1013 08:31:56.137058 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerName="registry-server"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.137065 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerName="registry-server"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.137325 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4d3fe9-fde6-4388-892d-6477fa1aa0c4" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.137352 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2a42b3-1c7e-4e6f-9299-06162b0f1048" containerName="registry-server"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.138281 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.142954 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.143012 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.143092 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.143151 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.151183 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-rlqwd"]
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.201595 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqnc4\" (UniqueName: \"kubernetes.io/projected/9a5185c3-99ea-4511-a0ec-f614d10e420f-kube-api-access-fqnc4\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.201677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.201720 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.201894 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-inventory\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.304449 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.304511 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.304596 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-inventory\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.305550 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnc4\" (UniqueName: \"kubernetes.io/projected/9a5185c3-99ea-4511-a0ec-f614d10e420f-kube-api-access-fqnc4\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.311530 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.311756 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.311996 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-inventory\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.324892 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnc4\" (UniqueName: \"kubernetes.io/projected/9a5185c3-99ea-4511-a0ec-f614d10e420f-kube-api-access-fqnc4\") pod \"bootstrap-openstack-openstack-cell1-rlqwd\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") " pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:56 crc kubenswrapper[4833]: I1013 08:31:56.458715 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:31:57 crc kubenswrapper[4833]: I1013 08:31:57.010621 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-rlqwd"]
Oct 13 08:31:57 crc kubenswrapper[4833]: I1013 08:31:57.736353 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd" event={"ID":"9a5185c3-99ea-4511-a0ec-f614d10e420f","Type":"ContainerStarted","Data":"a4cf47875dda6cc02f366e0b6dd8c7671e34790048b8424370c6c3b1f7436904"}
Oct 13 08:31:58 crc kubenswrapper[4833]: I1013 08:31:58.751089 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd" event={"ID":"9a5185c3-99ea-4511-a0ec-f614d10e420f","Type":"ContainerStarted","Data":"46a0d53833e7be788fafd8ef8c080bc66624db088a72f7fed87988290e6cde04"}
Oct 13 08:31:58 crc kubenswrapper[4833]: I1013 08:31:58.773248 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd" podStartSLOduration=2.246678691 podStartE2EDuration="2.773222741s" podCreationTimestamp="2025-10-13 08:31:56 +0000 UTC" firstStartedPulling="2025-10-13 08:31:57.026034885 +0000 UTC m=+7407.126457801" lastFinishedPulling="2025-10-13 08:31:57.552578925 +0000 UTC m=+7407.653001851" observedRunningTime="2025-10-13 08:31:58.770621227 +0000 UTC m=+7408.871044183" watchObservedRunningTime="2025-10-13 08:31:58.773222741 +0000 UTC m=+7408.873645697"
Oct 13 08:33:00 crc kubenswrapper[4833]: I1013 08:33:00.542578 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:33:00 crc kubenswrapper[4833]: I1013 08:33:00.543158 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:33:30 crc kubenswrapper[4833]: I1013 08:33:30.542408 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:33:30 crc kubenswrapper[4833]: I1013 08:33:30.544832 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:34:00 crc kubenswrapper[4833]: I1013 08:34:00.543427 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:34:00 crc kubenswrapper[4833]: I1013 08:34:00.544658 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:34:00 crc kubenswrapper[4833]: I1013 08:34:00.544729 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss"
Oct 13 08:34:00 crc kubenswrapper[4833]: I1013 08:34:00.546033 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a61878d9bc65041e67d954d25961f084c1ef7801850f460f69cbb89c62e52cb1"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 08:34:00 crc kubenswrapper[4833]: I1013 08:34:00.546255 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://a61878d9bc65041e67d954d25961f084c1ef7801850f460f69cbb89c62e52cb1" gracePeriod=600
Oct 13 08:34:01 crc kubenswrapper[4833]: I1013 08:34:01.116498 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="a61878d9bc65041e67d954d25961f084c1ef7801850f460f69cbb89c62e52cb1" exitCode=0
Oct 13 08:34:01 crc kubenswrapper[4833]: I1013 08:34:01.116561 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"a61878d9bc65041e67d954d25961f084c1ef7801850f460f69cbb89c62e52cb1"}
Oct 13 08:34:01 crc kubenswrapper[4833]: I1013 08:34:01.116813 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7"}
Oct 13 08:34:01 crc kubenswrapper[4833]: I1013 08:34:01.116836 4833 scope.go:117] "RemoveContainer" containerID="e750b4528493e1226803bb48b2f89a189fd7217c1157b888cd297653099141bd"
Oct 13 08:35:10 crc kubenswrapper[4833]: I1013 08:35:10.918039 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a5185c3-99ea-4511-a0ec-f614d10e420f" containerID="46a0d53833e7be788fafd8ef8c080bc66624db088a72f7fed87988290e6cde04" exitCode=0
Oct 13 08:35:10 crc kubenswrapper[4833]: I1013 08:35:10.918156 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd" event={"ID":"9a5185c3-99ea-4511-a0ec-f614d10e420f","Type":"ContainerDied","Data":"46a0d53833e7be788fafd8ef8c080bc66624db088a72f7fed87988290e6cde04"}
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.381282 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.535334 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-ssh-key\") pod \"9a5185c3-99ea-4511-a0ec-f614d10e420f\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") "
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.535686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-inventory\") pod \"9a5185c3-99ea-4511-a0ec-f614d10e420f\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") "
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.535802 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-bootstrap-combined-ca-bundle\") pod \"9a5185c3-99ea-4511-a0ec-f614d10e420f\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") "
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.536491 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqnc4\" (UniqueName: \"kubernetes.io/projected/9a5185c3-99ea-4511-a0ec-f614d10e420f-kube-api-access-fqnc4\") pod \"9a5185c3-99ea-4511-a0ec-f614d10e420f\" (UID: \"9a5185c3-99ea-4511-a0ec-f614d10e420f\") "
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.541641 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5185c3-99ea-4511-a0ec-f614d10e420f-kube-api-access-fqnc4" (OuterVolumeSpecName: "kube-api-access-fqnc4") pod "9a5185c3-99ea-4511-a0ec-f614d10e420f" (UID: "9a5185c3-99ea-4511-a0ec-f614d10e420f"). InnerVolumeSpecName "kube-api-access-fqnc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.541781 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9a5185c3-99ea-4511-a0ec-f614d10e420f" (UID: "9a5185c3-99ea-4511-a0ec-f614d10e420f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.568072 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9a5185c3-99ea-4511-a0ec-f614d10e420f" (UID: "9a5185c3-99ea-4511-a0ec-f614d10e420f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.589839 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-inventory" (OuterVolumeSpecName: "inventory") pod "9a5185c3-99ea-4511-a0ec-f614d10e420f" (UID: "9a5185c3-99ea-4511-a0ec-f614d10e420f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.639356 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-inventory\") on node \"crc\" DevicePath \"\""
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.639406 4833 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.639427 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqnc4\" (UniqueName: \"kubernetes.io/projected/9a5185c3-99ea-4511-a0ec-f614d10e420f-kube-api-access-fqnc4\") on node \"crc\" DevicePath \"\""
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.639446 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5185c3-99ea-4511-a0ec-f614d10e420f-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.940976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd" event={"ID":"9a5185c3-99ea-4511-a0ec-f614d10e420f","Type":"ContainerDied","Data":"a4cf47875dda6cc02f366e0b6dd8c7671e34790048b8424370c6c3b1f7436904"}
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.941266 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4cf47875dda6cc02f366e0b6dd8c7671e34790048b8424370c6c3b1f7436904"
Oct 13 08:35:12 crc kubenswrapper[4833]: I1013 08:35:12.941083 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-rlqwd"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.062961 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-vx4f2"]
Oct 13 08:35:13 crc kubenswrapper[4833]: E1013 08:35:13.063949 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5185c3-99ea-4511-a0ec-f614d10e420f" containerName="bootstrap-openstack-openstack-cell1"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.063991 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5185c3-99ea-4511-a0ec-f614d10e420f" containerName="bootstrap-openstack-openstack-cell1"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.064508 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5185c3-99ea-4511-a0ec-f614d10e420f" containerName="bootstrap-openstack-openstack-cell1"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.066077 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.068690 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.068735 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.069036 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.069035 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.084349 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-vx4f2"]
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.151211 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-ssh-key\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.151278 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqfbr\" (UniqueName: \"kubernetes.io/projected/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-kube-api-access-tqfbr\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.151426 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-inventory\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.253856 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-ssh-key\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.253948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqfbr\" (UniqueName: \"kubernetes.io/projected/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-kube-api-access-tqfbr\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.254101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-inventory\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.258062 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-ssh-key\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.258603 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-inventory\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.272213 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqfbr\" (UniqueName: \"kubernetes.io/projected/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-kube-api-access-tqfbr\") pod \"download-cache-openstack-openstack-cell1-vx4f2\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:13 crc kubenswrapper[4833]: I1013 08:35:13.390453 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2"
Oct 13 08:35:14 crc kubenswrapper[4833]: I1013 08:35:14.056866 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-vx4f2"]
Oct 13 08:35:14 crc kubenswrapper[4833]: I1013 08:35:14.068939 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 13 08:35:14 crc kubenswrapper[4833]: I1013 08:35:14.969038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2" event={"ID":"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975","Type":"ContainerStarted","Data":"e2bbfc2c5395efdc55fc27c9dadad7a3a3d1856460e933fdf009312cd9867290"}
Oct 13 08:35:14 crc kubenswrapper[4833]: I1013 08:35:14.969725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2" event={"ID":"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975","Type":"ContainerStarted","Data":"0310a2817263aa9b5f8696704c5e959a3c1d7c5c621b2d667aef8cb3c288f0fe"}
Oct 13 08:35:15 crc kubenswrapper[4833]: I1013 08:35:15.027934 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2" podStartSLOduration=1.620574783 podStartE2EDuration="2.027905491s" podCreationTimestamp="2025-10-13 08:35:13 +0000 UTC" firstStartedPulling="2025-10-13 08:35:14.068390834 +0000 UTC m=+7604.168813790" lastFinishedPulling="2025-10-13 08:35:14.475721572 +0000 UTC m=+7604.576144498" observedRunningTime="2025-10-13 08:35:14.997249249 +0000 UTC m=+7605.097672225" watchObservedRunningTime="2025-10-13 08:35:15.027905491 +0000 UTC m=+7605.128328427"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.098804 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tr2zm"]
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.101996 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.111240 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr2zm"]
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.246195 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmpj\" (UniqueName: \"kubernetes.io/projected/4ff846c5-cbda-4644-8c32-c0b1571aeea2-kube-api-access-pxmpj\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.246294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-catalog-content\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.246341 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-utilities\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.347904 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmpj\" (UniqueName: \"kubernetes.io/projected/4ff846c5-cbda-4644-8c32-c0b1571aeea2-kube-api-access-pxmpj\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.348003 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-catalog-content\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.348060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-utilities\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.348699 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-utilities\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.349921 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-catalog-content\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.372258 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmpj\" (UniqueName: \"kubernetes.io/projected/4ff846c5-cbda-4644-8c32-c0b1571aeea2-kube-api-access-pxmpj\") pod \"redhat-operators-tr2zm\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.429874 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr2zm"
Oct 13 08:35:59 crc kubenswrapper[4833]: I1013 08:35:59.891549 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr2zm"]
Oct 13 08:35:59 crc kubenswrapper[4833]: W1013 08:35:59.902172 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ff846c5_cbda_4644_8c32_c0b1571aeea2.slice/crio-b2d3c421e6c30e4cc39d0efea00d79834ec2046308d8b71aa02e46e36efe1777 WatchSource:0}: Error finding container b2d3c421e6c30e4cc39d0efea00d79834ec2046308d8b71aa02e46e36efe1777: Status 404 returned error can't find the container with id b2d3c421e6c30e4cc39d0efea00d79834ec2046308d8b71aa02e46e36efe1777
Oct 13 08:36:00 crc kubenswrapper[4833]: I1013 08:36:00.464673 4833 generic.go:334] "Generic (PLEG): container finished" podID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerID="0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314" exitCode=0
Oct 13 08:36:00 crc kubenswrapper[4833]: I1013 08:36:00.464725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr2zm" event={"ID":"4ff846c5-cbda-4644-8c32-c0b1571aeea2","Type":"ContainerDied","Data":"0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314"}
Oct 13 08:36:00 crc kubenswrapper[4833]: I1013 08:36:00.466099 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr2zm" event={"ID":"4ff846c5-cbda-4644-8c32-c0b1571aeea2","Type":"ContainerStarted","Data":"b2d3c421e6c30e4cc39d0efea00d79834ec2046308d8b71aa02e46e36efe1777"}
Oct 13 08:36:00 crc kubenswrapper[4833]: I1013 08:36:00.542572 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:36:00 crc kubenswrapper[4833]: I1013 08:36:00.542655 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:36:02 crc kubenswrapper[4833]: I1013 08:36:02.492237 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr2zm" event={"ID":"4ff846c5-cbda-4644-8c32-c0b1571aeea2","Type":"ContainerStarted","Data":"d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9"}
Oct 13 08:36:05 crc kubenswrapper[4833]: I1013 08:36:05.531571 4833 generic.go:334] "Generic (PLEG): container finished" podID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerID="d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9" exitCode=0
Oct 13 08:36:05 crc kubenswrapper[4833]: I1013 08:36:05.531693 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr2zm"
event={"ID":"4ff846c5-cbda-4644-8c32-c0b1571aeea2","Type":"ContainerDied","Data":"d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9"} Oct 13 08:36:06 crc kubenswrapper[4833]: I1013 08:36:06.552630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr2zm" event={"ID":"4ff846c5-cbda-4644-8c32-c0b1571aeea2","Type":"ContainerStarted","Data":"22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96"} Oct 13 08:36:06 crc kubenswrapper[4833]: I1013 08:36:06.577186 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tr2zm" podStartSLOduration=1.959948571 podStartE2EDuration="7.577166615s" podCreationTimestamp="2025-10-13 08:35:59 +0000 UTC" firstStartedPulling="2025-10-13 08:36:00.467420309 +0000 UTC m=+7650.567843235" lastFinishedPulling="2025-10-13 08:36:06.084638343 +0000 UTC m=+7656.185061279" observedRunningTime="2025-10-13 08:36:06.569724313 +0000 UTC m=+7656.670147259" watchObservedRunningTime="2025-10-13 08:36:06.577166615 +0000 UTC m=+7656.677589541" Oct 13 08:36:09 crc kubenswrapper[4833]: I1013 08:36:09.430944 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tr2zm" Oct 13 08:36:09 crc kubenswrapper[4833]: I1013 08:36:09.431364 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tr2zm" Oct 13 08:36:10 crc kubenswrapper[4833]: I1013 08:36:10.499658 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tr2zm" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="registry-server" probeResult="failure" output=< Oct 13 08:36:10 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Oct 13 08:36:10 crc kubenswrapper[4833]: > Oct 13 08:36:19 crc kubenswrapper[4833]: I1013 08:36:19.489400 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tr2zm" Oct 13 08:36:19 crc kubenswrapper[4833]: I1013 08:36:19.571301 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tr2zm" Oct 13 08:36:19 crc kubenswrapper[4833]: I1013 08:36:19.735945 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tr2zm"] Oct 13 08:36:20 crc kubenswrapper[4833]: I1013 08:36:20.702218 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tr2zm" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="registry-server" containerID="cri-o://22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96" gracePeriod=2 Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.267347 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tr2zm" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.372169 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-catalog-content\") pod \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.372280 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxmpj\" (UniqueName: \"kubernetes.io/projected/4ff846c5-cbda-4644-8c32-c0b1571aeea2-kube-api-access-pxmpj\") pod \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.372327 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-utilities\") pod \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\" (UID: \"4ff846c5-cbda-4644-8c32-c0b1571aeea2\") " Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.374469 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-utilities" (OuterVolumeSpecName: "utilities") pod "4ff846c5-cbda-4644-8c32-c0b1571aeea2" (UID: "4ff846c5-cbda-4644-8c32-c0b1571aeea2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.380726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff846c5-cbda-4644-8c32-c0b1571aeea2-kube-api-access-pxmpj" (OuterVolumeSpecName: "kube-api-access-pxmpj") pod "4ff846c5-cbda-4644-8c32-c0b1571aeea2" (UID: "4ff846c5-cbda-4644-8c32-c0b1571aeea2"). InnerVolumeSpecName "kube-api-access-pxmpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.461473 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ff846c5-cbda-4644-8c32-c0b1571aeea2" (UID: "4ff846c5-cbda-4644-8c32-c0b1571aeea2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.475960 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.476030 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxmpj\" (UniqueName: \"kubernetes.io/projected/4ff846c5-cbda-4644-8c32-c0b1571aeea2-kube-api-access-pxmpj\") on node \"crc\" DevicePath \"\"" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.476053 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff846c5-cbda-4644-8c32-c0b1571aeea2-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.717563 4833 generic.go:334] "Generic (PLEG): container finished" podID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerID="22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96" exitCode=0 Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.717907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr2zm" event={"ID":"4ff846c5-cbda-4644-8c32-c0b1571aeea2","Type":"ContainerDied","Data":"22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96"} Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.717939 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr2zm" event={"ID":"4ff846c5-cbda-4644-8c32-c0b1571aeea2","Type":"ContainerDied","Data":"b2d3c421e6c30e4cc39d0efea00d79834ec2046308d8b71aa02e46e36efe1777"} Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.717961 4833 scope.go:117] "RemoveContainer" containerID="22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.718018 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tr2zm" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.775260 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tr2zm"] Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.779664 4833 scope.go:117] "RemoveContainer" containerID="d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.785942 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tr2zm"] Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.806671 4833 scope.go:117] "RemoveContainer" containerID="0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.868579 4833 scope.go:117] "RemoveContainer" containerID="22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96" Oct 13 08:36:21 crc kubenswrapper[4833]: E1013 08:36:21.869044 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96\": container with ID starting with 22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96 not found: ID does not exist" containerID="22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.869096 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96"} err="failed to get container status \"22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96\": rpc error: code = NotFound desc = could not find container \"22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96\": container with ID starting with 22ddb7f528d3f00c595064018d8ea11d8704f3b92e916c87029cb02a8db34a96 not found: ID does not exist" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.869129 4833 scope.go:117] "RemoveContainer" containerID="d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9" Oct 13 08:36:21 crc kubenswrapper[4833]: E1013 08:36:21.869461 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9\": container with ID starting with d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9 not found: ID does not exist" containerID="d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.869501 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9"} err="failed to get container status \"d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9\": rpc error: code = NotFound desc = could not find container \"d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9\": container with ID starting with d882a7ff3293050a31dd4a3b3dcdf664a24c06a68a7e4dc3aec25aa422e790e9 not found: ID does not exist" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.869529 4833 scope.go:117] "RemoveContainer" containerID="0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314" Oct 13 08:36:21 crc kubenswrapper[4833]: E1013 08:36:21.870077 4833 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314\": container with ID starting with 0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314 not found: ID does not exist" containerID="0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314" Oct 13 08:36:21 crc kubenswrapper[4833]: I1013 08:36:21.870232 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314"} err="failed to get container status \"0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314\": rpc error: code = NotFound desc = could not find container \"0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314\": container with ID starting with 0f10b2803ddf93fe6530e22b50bbf15144fa18ab08822977f24ce344258cb314 not found: ID does not exist" Oct 13 08:36:22 crc kubenswrapper[4833]: I1013 08:36:22.639188 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" path="/var/lib/kubelet/pods/4ff846c5-cbda-4644-8c32-c0b1571aeea2/volumes" Oct 13 08:36:30 crc kubenswrapper[4833]: I1013 08:36:30.543332 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:36:30 crc kubenswrapper[4833]: I1013 08:36:30.544353 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:36:54 crc kubenswrapper[4833]: I1013 08:36:54.118620 4833 generic.go:334] "Generic (PLEG): container finished" podID="4cd90ca5-fdd6-4c4b-8f4c-6c156a814975" containerID="e2bbfc2c5395efdc55fc27c9dadad7a3a3d1856460e933fdf009312cd9867290" exitCode=0 Oct 13 08:36:54 crc kubenswrapper[4833]: I1013 08:36:54.118835 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2" event={"ID":"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975","Type":"ContainerDied","Data":"e2bbfc2c5395efdc55fc27c9dadad7a3a3d1856460e933fdf009312cd9867290"} Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.647187 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2" Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.714713 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-ssh-key\") pod \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.714923 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqfbr\" (UniqueName: \"kubernetes.io/projected/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-kube-api-access-tqfbr\") pod \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.714961 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-inventory\") pod \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\" (UID: \"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975\") " Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.720096 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-kube-api-access-tqfbr" (OuterVolumeSpecName: "kube-api-access-tqfbr") pod "4cd90ca5-fdd6-4c4b-8f4c-6c156a814975" (UID: "4cd90ca5-fdd6-4c4b-8f4c-6c156a814975"). InnerVolumeSpecName "kube-api-access-tqfbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.744590 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4cd90ca5-fdd6-4c4b-8f4c-6c156a814975" (UID: "4cd90ca5-fdd6-4c4b-8f4c-6c156a814975"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.748346 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-inventory" (OuterVolumeSpecName: "inventory") pod "4cd90ca5-fdd6-4c4b-8f4c-6c156a814975" (UID: "4cd90ca5-fdd6-4c4b-8f4c-6c156a814975"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.818830 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.818958 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqfbr\" (UniqueName: \"kubernetes.io/projected/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-kube-api-access-tqfbr\") on node \"crc\" DevicePath \"\"" Oct 13 08:36:55 crc kubenswrapper[4833]: I1013 08:36:55.818986 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd90ca5-fdd6-4c4b-8f4c-6c156a814975-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.147420 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2" event={"ID":"4cd90ca5-fdd6-4c4b-8f4c-6c156a814975","Type":"ContainerDied","Data":"0310a2817263aa9b5f8696704c5e959a3c1d7c5c621b2d667aef8cb3c288f0fe"} Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.147469 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0310a2817263aa9b5f8696704c5e959a3c1d7c5c621b2d667aef8cb3c288f0fe" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.147489 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-vx4f2" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.272760 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kpjvj"] Oct 13 08:36:56 crc kubenswrapper[4833]: E1013 08:36:56.273291 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="extract-content" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.273311 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="extract-content" Oct 13 08:36:56 crc kubenswrapper[4833]: E1013 08:36:56.273330 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="registry-server" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.273338 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="registry-server" Oct 13 08:36:56 crc kubenswrapper[4833]: E1013 08:36:56.273363 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="extract-utilities" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.273371 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="extract-utilities" Oct 13 08:36:56 crc kubenswrapper[4833]: E1013 08:36:56.273395 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd90ca5-fdd6-4c4b-8f4c-6c156a814975" containerName="download-cache-openstack-openstack-cell1" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.273404 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd90ca5-fdd6-4c4b-8f4c-6c156a814975" containerName="download-cache-openstack-openstack-cell1" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.273697 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4cd90ca5-fdd6-4c4b-8f4c-6c156a814975" containerName="download-cache-openstack-openstack-cell1" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.273720 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff846c5-cbda-4644-8c32-c0b1571aeea2" containerName="registry-server" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.274577 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.278050 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.279236 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.279662 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.280025 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.288734 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kpjvj"] Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.332002 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbq6\" (UniqueName: \"kubernetes.io/projected/c42a7993-128b-4196-b9b0-0622b7ecfca4-kube-api-access-pvbq6\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.332175 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-inventory\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.332262 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-ssh-key\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.434323 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbq6\" (UniqueName: \"kubernetes.io/projected/c42a7993-128b-4196-b9b0-0622b7ecfca4-kube-api-access-pvbq6\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.434435 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-inventory\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " 
pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.434496 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-ssh-key\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.439161 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-ssh-key\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.440306 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-inventory\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.461234 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbq6\" (UniqueName: \"kubernetes.io/projected/c42a7993-128b-4196-b9b0-0622b7ecfca4-kube-api-access-pvbq6\") pod \"configure-network-openstack-openstack-cell1-kpjvj\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:56 crc kubenswrapper[4833]: I1013 08:36:56.621187 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:36:57 crc kubenswrapper[4833]: I1013 08:36:57.249158 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kpjvj"] Oct 13 08:36:58 crc kubenswrapper[4833]: I1013 08:36:58.175328 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" event={"ID":"c42a7993-128b-4196-b9b0-0622b7ecfca4","Type":"ContainerStarted","Data":"afabe9e88ac7a8285b2b80efa4d7055f31a1641508f83e5e12ab4e7c0898bc40"} Oct 13 08:36:59 crc kubenswrapper[4833]: I1013 08:36:59.194907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" event={"ID":"c42a7993-128b-4196-b9b0-0622b7ecfca4","Type":"ContainerStarted","Data":"3a72002f8e44f4984da06fb4cde73d02268267fc220d8a31157bffdd4125f346"} Oct 13 08:36:59 crc kubenswrapper[4833]: I1013 08:36:59.221392 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" podStartSLOduration=2.506127066 podStartE2EDuration="3.221374564s" podCreationTimestamp="2025-10-13 08:36:56 +0000 UTC" firstStartedPulling="2025-10-13 08:36:57.247442308 +0000 UTC m=+7707.347865234" lastFinishedPulling="2025-10-13 08:36:57.962689816 +0000 UTC m=+7708.063112732" observedRunningTime="2025-10-13 08:36:59.217425742 +0000 UTC m=+7709.317848698" watchObservedRunningTime="2025-10-13 08:36:59.221374564 +0000 UTC m=+7709.321797490" Oct 13 08:37:00 crc kubenswrapper[4833]: I1013 08:37:00.542366 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:37:00 crc kubenswrapper[4833]: I1013 08:37:00.542799 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:37:00 crc kubenswrapper[4833]: I1013 08:37:00.542870 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 08:37:00 crc kubenswrapper[4833]: I1013 08:37:00.543992 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 08:37:00 crc kubenswrapper[4833]: I1013 08:37:00.544093 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" gracePeriod=600 Oct 13 08:37:00 crc kubenswrapper[4833]: E1013 08:37:00.670133 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:37:01 crc kubenswrapper[4833]: I1013 08:37:01.222739 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" exitCode=0 Oct 13 08:37:01 crc kubenswrapper[4833]: I1013 08:37:01.223036 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7"} Oct 13 08:37:01 crc kubenswrapper[4833]: I1013 08:37:01.223169 4833 scope.go:117] "RemoveContainer" containerID="a61878d9bc65041e67d954d25961f084c1ef7801850f460f69cbb89c62e52cb1" Oct 13 08:37:01 crc kubenswrapper[4833]: I1013 08:37:01.224337 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:37:01 crc kubenswrapper[4833]: E1013 08:37:01.224753 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:37:16 crc kubenswrapper[4833]: I1013 08:37:16.629349 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:37:16 crc kubenswrapper[4833]: E1013 08:37:16.630259 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:37:29 crc kubenswrapper[4833]: I1013 08:37:29.627110 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:37:29 crc kubenswrapper[4833]: E1013 08:37:29.627976 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:37:43 crc kubenswrapper[4833]: I1013 08:37:43.628408 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:37:43 crc kubenswrapper[4833]: E1013 08:37:43.629626 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:37:55 crc kubenswrapper[4833]: I1013 08:37:55.627567 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:37:55 crc kubenswrapper[4833]: E1013 08:37:55.628584 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:38:08 crc kubenswrapper[4833]: I1013 08:38:08.628302 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:38:08 crc kubenswrapper[4833]: E1013 08:38:08.629422 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:38:20 crc kubenswrapper[4833]: I1013 08:38:20.120190 4833 generic.go:334] "Generic (PLEG): container finished" podID="c42a7993-128b-4196-b9b0-0622b7ecfca4" containerID="3a72002f8e44f4984da06fb4cde73d02268267fc220d8a31157bffdd4125f346" exitCode=0 Oct 13 08:38:20 crc kubenswrapper[4833]: I1013 08:38:20.120339 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" event={"ID":"c42a7993-128b-4196-b9b0-0622b7ecfca4","Type":"ContainerDied","Data":"3a72002f8e44f4984da06fb4cde73d02268267fc220d8a31157bffdd4125f346"} Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.596113 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.627290 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:38:21 crc kubenswrapper[4833]: E1013 08:38:21.627722 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.660181 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-inventory\") pod \"c42a7993-128b-4196-b9b0-0622b7ecfca4\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.660474 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvbq6\" (UniqueName: \"kubernetes.io/projected/c42a7993-128b-4196-b9b0-0622b7ecfca4-kube-api-access-pvbq6\") pod \"c42a7993-128b-4196-b9b0-0622b7ecfca4\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.660516 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-ssh-key\") pod \"c42a7993-128b-4196-b9b0-0622b7ecfca4\" (UID: \"c42a7993-128b-4196-b9b0-0622b7ecfca4\") " Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.682296 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42a7993-128b-4196-b9b0-0622b7ecfca4-kube-api-access-pvbq6" (OuterVolumeSpecName: "kube-api-access-pvbq6") pod "c42a7993-128b-4196-b9b0-0622b7ecfca4" (UID: "c42a7993-128b-4196-b9b0-0622b7ecfca4"). InnerVolumeSpecName "kube-api-access-pvbq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.710476 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c42a7993-128b-4196-b9b0-0622b7ecfca4" (UID: "c42a7993-128b-4196-b9b0-0622b7ecfca4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.714313 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-inventory" (OuterVolumeSpecName: "inventory") pod "c42a7993-128b-4196-b9b0-0622b7ecfca4" (UID: "c42a7993-128b-4196-b9b0-0622b7ecfca4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.772181 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvbq6\" (UniqueName: \"kubernetes.io/projected/c42a7993-128b-4196-b9b0-0622b7ecfca4-kube-api-access-pvbq6\") on node \"crc\" DevicePath \"\"" Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.772229 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:38:21 crc kubenswrapper[4833]: I1013 08:38:21.772268 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c42a7993-128b-4196-b9b0-0622b7ecfca4-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.147422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" event={"ID":"c42a7993-128b-4196-b9b0-0622b7ecfca4","Type":"ContainerDied","Data":"afabe9e88ac7a8285b2b80efa4d7055f31a1641508f83e5e12ab4e7c0898bc40"} Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.147465 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afabe9e88ac7a8285b2b80efa4d7055f31a1641508f83e5e12ab4e7c0898bc40" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.147953 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kpjvj" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.264450 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-ldvs7"] Oct 13 08:38:22 crc kubenswrapper[4833]: E1013 08:38:22.264977 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42a7993-128b-4196-b9b0-0622b7ecfca4" containerName="configure-network-openstack-openstack-cell1" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.264998 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42a7993-128b-4196-b9b0-0622b7ecfca4" containerName="configure-network-openstack-openstack-cell1" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.265277 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42a7993-128b-4196-b9b0-0622b7ecfca4" containerName="configure-network-openstack-openstack-cell1" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.266183 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.268892 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.269361 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.271338 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.272495 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.293453 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-ldvs7"] Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.384983 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mdj\" (UniqueName: \"kubernetes.io/projected/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-kube-api-access-78mdj\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.385173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-inventory\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.385517 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-ssh-key\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.487722 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-ssh-key\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.488007 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78mdj\" (UniqueName: \"kubernetes.io/projected/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-kube-api-access-78mdj\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.488073 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-inventory\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " 
pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.494054 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-ssh-key\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.495659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-inventory\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.510392 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78mdj\" (UniqueName: \"kubernetes.io/projected/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-kube-api-access-78mdj\") pod \"validate-network-openstack-openstack-cell1-ldvs7\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:22 crc kubenswrapper[4833]: I1013 08:38:22.588127 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:23 crc kubenswrapper[4833]: I1013 08:38:23.185457 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-ldvs7"] Oct 13 08:38:24 crc kubenswrapper[4833]: I1013 08:38:24.168489 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" event={"ID":"eb27761f-f85a-4cb5-a221-b3d8eaf993c8","Type":"ContainerStarted","Data":"cea7b4b6b26c544efe3d0b5224cf5c0912833ae56dc33ac0dca060101917e3aa"} Oct 13 08:38:24 crc kubenswrapper[4833]: I1013 08:38:24.168868 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" event={"ID":"eb27761f-f85a-4cb5-a221-b3d8eaf993c8","Type":"ContainerStarted","Data":"aa3601fe6b6cacc66c22eec5f26ac4c32dd33e161249b0fbc4ca42f07364cb61"} Oct 13 08:38:24 crc kubenswrapper[4833]: I1013 08:38:24.192040 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" podStartSLOduration=1.618850295 podStartE2EDuration="2.191999699s" podCreationTimestamp="2025-10-13 08:38:22 +0000 UTC" firstStartedPulling="2025-10-13 08:38:23.194602965 +0000 UTC m=+7793.295025891" lastFinishedPulling="2025-10-13 08:38:23.767752359 +0000 UTC m=+7793.868175295" observedRunningTime="2025-10-13 08:38:24.18712348 +0000 UTC m=+7794.287546426" watchObservedRunningTime="2025-10-13 08:38:24.191999699 +0000 UTC m=+7794.292422655" Oct 13 08:38:29 crc kubenswrapper[4833]: I1013 08:38:29.223013 4833 generic.go:334] "Generic (PLEG): container finished" podID="eb27761f-f85a-4cb5-a221-b3d8eaf993c8" containerID="cea7b4b6b26c544efe3d0b5224cf5c0912833ae56dc33ac0dca060101917e3aa" exitCode=0 Oct 13 08:38:29 crc kubenswrapper[4833]: I1013 08:38:29.223086 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" 
event={"ID":"eb27761f-f85a-4cb5-a221-b3d8eaf993c8","Type":"ContainerDied","Data":"cea7b4b6b26c544efe3d0b5224cf5c0912833ae56dc33ac0dca060101917e3aa"} Oct 13 08:38:30 crc kubenswrapper[4833]: I1013 08:38:30.809971 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:30 crc kubenswrapper[4833]: I1013 08:38:30.898510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-inventory\") pod \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " Oct 13 08:38:30 crc kubenswrapper[4833]: I1013 08:38:30.898942 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78mdj\" (UniqueName: \"kubernetes.io/projected/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-kube-api-access-78mdj\") pod \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " Oct 13 08:38:30 crc kubenswrapper[4833]: I1013 08:38:30.899082 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-ssh-key\") pod \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\" (UID: \"eb27761f-f85a-4cb5-a221-b3d8eaf993c8\") " Oct 13 08:38:30 crc kubenswrapper[4833]: I1013 08:38:30.906895 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-kube-api-access-78mdj" (OuterVolumeSpecName: "kube-api-access-78mdj") pod "eb27761f-f85a-4cb5-a221-b3d8eaf993c8" (UID: "eb27761f-f85a-4cb5-a221-b3d8eaf993c8"). InnerVolumeSpecName "kube-api-access-78mdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:38:30 crc kubenswrapper[4833]: I1013 08:38:30.939607 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb27761f-f85a-4cb5-a221-b3d8eaf993c8" (UID: "eb27761f-f85a-4cb5-a221-b3d8eaf993c8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:38:30 crc kubenswrapper[4833]: I1013 08:38:30.973288 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-inventory" (OuterVolumeSpecName: "inventory") pod "eb27761f-f85a-4cb5-a221-b3d8eaf993c8" (UID: "eb27761f-f85a-4cb5-a221-b3d8eaf993c8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.002438 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.002524 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78mdj\" (UniqueName: \"kubernetes.io/projected/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-kube-api-access-78mdj\") on node \"crc\" DevicePath \"\"" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.002580 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb27761f-f85a-4cb5-a221-b3d8eaf993c8-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.254075 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" event={"ID":"eb27761f-f85a-4cb5-a221-b3d8eaf993c8","Type":"ContainerDied","Data":"aa3601fe6b6cacc66c22eec5f26ac4c32dd33e161249b0fbc4ca42f07364cb61"} Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.254120 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa3601fe6b6cacc66c22eec5f26ac4c32dd33e161249b0fbc4ca42f07364cb61" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.254183 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-ldvs7" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.341261 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-sq85p"] Oct 13 08:38:31 crc kubenswrapper[4833]: E1013 08:38:31.341804 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb27761f-f85a-4cb5-a221-b3d8eaf993c8" containerName="validate-network-openstack-openstack-cell1" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.341830 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb27761f-f85a-4cb5-a221-b3d8eaf993c8" containerName="validate-network-openstack-openstack-cell1" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.342093 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb27761f-f85a-4cb5-a221-b3d8eaf993c8" containerName="validate-network-openstack-openstack-cell1" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.343016 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.351600 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.352626 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.352765 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.352907 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.380839 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-sq85p"] Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.412644 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-inventory\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.412710 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-ssh-key\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.412747 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gb7\" (UniqueName: \"kubernetes.io/projected/323322d4-ab6d-4f37-aea1-5c3497ce1522-kube-api-access-d5gb7\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.514499 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-inventory\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.514570 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-ssh-key\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.514592 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5gb7\" (UniqueName: \"kubernetes.io/projected/323322d4-ab6d-4f37-aea1-5c3497ce1522-kube-api-access-d5gb7\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.520229 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-inventory\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.521471 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-ssh-key\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.541026 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5gb7\" (UniqueName: \"kubernetes.io/projected/323322d4-ab6d-4f37-aea1-5c3497ce1522-kube-api-access-d5gb7\") pod \"install-os-openstack-openstack-cell1-sq85p\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:31 crc kubenswrapper[4833]: I1013 08:38:31.670134 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:38:32 crc kubenswrapper[4833]: I1013 08:38:32.287352 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-sq85p"] Oct 13 08:38:33 crc kubenswrapper[4833]: I1013 08:38:33.275875 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-sq85p" event={"ID":"323322d4-ab6d-4f37-aea1-5c3497ce1522","Type":"ContainerStarted","Data":"51650f9a11a4340ab325d5096fa5ddf405c42ce3264cf7c953dc4545a856a0f0"} Oct 13 08:38:33 crc kubenswrapper[4833]: I1013 08:38:33.276284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-sq85p" event={"ID":"323322d4-ab6d-4f37-aea1-5c3497ce1522","Type":"ContainerStarted","Data":"f986d26cf32b863bb749b172a960c7cfe16e41fca1ebc7c0bebc700871ee775b"} Oct 13 08:38:33 crc kubenswrapper[4833]: I1013 08:38:33.295268 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-sq85p" podStartSLOduration=1.886295632 podStartE2EDuration="2.295250066s" podCreationTimestamp="2025-10-13 08:38:31 +0000 UTC" firstStartedPulling="2025-10-13 08:38:32.286511048 +0000 UTC m=+7802.386934004" lastFinishedPulling="2025-10-13 08:38:32.695465522 +0000 UTC m=+7802.795888438" observedRunningTime="2025-10-13 08:38:33.294618008 +0000 UTC m=+7803.395040964" watchObservedRunningTime="2025-10-13 08:38:33.295250066 +0000 UTC m=+7803.395672982" Oct 13 08:38:35 crc kubenswrapper[4833]: I1013 08:38:35.627395 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:38:35 crc kubenswrapper[4833]: E1013 08:38:35.628244 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:38:48 crc kubenswrapper[4833]: I1013 08:38:48.627786 
4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:38:48 crc kubenswrapper[4833]: E1013 08:38:48.628704 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:39:02 crc kubenswrapper[4833]: I1013 08:39:02.627581 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:39:02 crc kubenswrapper[4833]: E1013 08:39:02.628702 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:39:14 crc kubenswrapper[4833]: I1013 08:39:14.627522 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:39:14 crc kubenswrapper[4833]: E1013 08:39:14.628666 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:39:21 crc kubenswrapper[4833]: I1013 08:39:21.884947 4833 generic.go:334] "Generic (PLEG): container finished" podID="323322d4-ab6d-4f37-aea1-5c3497ce1522" containerID="51650f9a11a4340ab325d5096fa5ddf405c42ce3264cf7c953dc4545a856a0f0" exitCode=0 Oct 13 08:39:21 crc kubenswrapper[4833]: I1013 08:39:21.885011 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-sq85p" event={"ID":"323322d4-ab6d-4f37-aea1-5c3497ce1522","Type":"ContainerDied","Data":"51650f9a11a4340ab325d5096fa5ddf405c42ce3264cf7c953dc4545a856a0f0"} Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.504988 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.653245 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-ssh-key\") pod \"323322d4-ab6d-4f37-aea1-5c3497ce1522\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.653340 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5gb7\" (UniqueName: \"kubernetes.io/projected/323322d4-ab6d-4f37-aea1-5c3497ce1522-kube-api-access-d5gb7\") pod \"323322d4-ab6d-4f37-aea1-5c3497ce1522\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.654766 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-inventory\") pod \"323322d4-ab6d-4f37-aea1-5c3497ce1522\" (UID: \"323322d4-ab6d-4f37-aea1-5c3497ce1522\") " Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.660907 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323322d4-ab6d-4f37-aea1-5c3497ce1522-kube-api-access-d5gb7" (OuterVolumeSpecName: "kube-api-access-d5gb7") pod "323322d4-ab6d-4f37-aea1-5c3497ce1522" (UID: "323322d4-ab6d-4f37-aea1-5c3497ce1522"). InnerVolumeSpecName "kube-api-access-d5gb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.703373 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "323322d4-ab6d-4f37-aea1-5c3497ce1522" (UID: "323322d4-ab6d-4f37-aea1-5c3497ce1522"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.711443 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-inventory" (OuterVolumeSpecName: "inventory") pod "323322d4-ab6d-4f37-aea1-5c3497ce1522" (UID: "323322d4-ab6d-4f37-aea1-5c3497ce1522"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.757827 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.757873 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5gb7\" (UniqueName: \"kubernetes.io/projected/323322d4-ab6d-4f37-aea1-5c3497ce1522-kube-api-access-d5gb7\") on node \"crc\" DevicePath \"\"" Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.757910 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/323322d4-ab6d-4f37-aea1-5c3497ce1522-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.929243 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-sq85p" event={"ID":"323322d4-ab6d-4f37-aea1-5c3497ce1522","Type":"ContainerDied","Data":"f986d26cf32b863bb749b172a960c7cfe16e41fca1ebc7c0bebc700871ee775b"} Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.929512 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f986d26cf32b863bb749b172a960c7cfe16e41fca1ebc7c0bebc700871ee775b" Oct 13 08:39:23 crc kubenswrapper[4833]: I1013 08:39:23.929677 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-sq85p" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.033790 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-f96v8"] Oct 13 08:39:24 crc kubenswrapper[4833]: E1013 08:39:24.034504 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323322d4-ab6d-4f37-aea1-5c3497ce1522" containerName="install-os-openstack-openstack-cell1" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.034528 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="323322d4-ab6d-4f37-aea1-5c3497ce1522" containerName="install-os-openstack-openstack-cell1" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.034845 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="323322d4-ab6d-4f37-aea1-5c3497ce1522" containerName="install-os-openstack-openstack-cell1" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.035793 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.039429 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.041084 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.041506 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.042099 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.049582 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-f96v8"] Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.066000 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p725m\" (UniqueName: \"kubernetes.io/projected/a805910b-b612-45f1-9b8e-98c498855a3d-kube-api-access-p725m\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.066108 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-ssh-key\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.066228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-inventory\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.168705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-inventory\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.168877 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p725m\" (UniqueName: \"kubernetes.io/projected/a805910b-b612-45f1-9b8e-98c498855a3d-kube-api-access-p725m\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.169011 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-ssh-key\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: 
I1013 08:39:24.173211 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-inventory\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.173257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-ssh-key\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.196328 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p725m\" (UniqueName: \"kubernetes.io/projected/a805910b-b612-45f1-9b8e-98c498855a3d-kube-api-access-p725m\") pod \"configure-os-openstack-openstack-cell1-f96v8\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:24 crc kubenswrapper[4833]: I1013 08:39:24.370663 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:39:25 crc kubenswrapper[4833]: I1013 08:39:25.006260 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-f96v8"] Oct 13 08:39:25 crc kubenswrapper[4833]: I1013 08:39:25.954997 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" event={"ID":"a805910b-b612-45f1-9b8e-98c498855a3d","Type":"ContainerStarted","Data":"b68fcc47f23e58453f1a9592118be4db9e4f688162f6a9c59c825c7f736204ad"} Oct 13 08:39:25 crc kubenswrapper[4833]: I1013 08:39:25.955357 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" event={"ID":"a805910b-b612-45f1-9b8e-98c498855a3d","Type":"ContainerStarted","Data":"992b58ad3a7a58a6e8681031dde4366170fc1532268f908e4d21f9fabd553dda"} Oct 13 08:39:25 crc kubenswrapper[4833]: I1013 08:39:25.994321 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" podStartSLOduration=1.491692102 podStartE2EDuration="1.994288511s" podCreationTimestamp="2025-10-13 08:39:24 +0000 UTC" firstStartedPulling="2025-10-13 08:39:25.02236498 +0000 UTC m=+7855.122787906" lastFinishedPulling="2025-10-13 08:39:25.524961369 +0000 UTC m=+7855.625384315" observedRunningTime="2025-10-13 08:39:25.976496495 +0000 UTC m=+7856.076919441" watchObservedRunningTime="2025-10-13 08:39:25.994288511 +0000 UTC m=+7856.094711477" Oct 13 08:39:29 crc kubenswrapper[4833]: I1013 08:39:29.628434 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:39:29 crc kubenswrapper[4833]: E1013 08:39:29.629833 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:39:41 
crc kubenswrapper[4833]: I1013 08:39:41.627224 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:39:41 crc kubenswrapper[4833]: E1013 08:39:41.628299 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:39:56 crc kubenswrapper[4833]: I1013 08:39:56.626882 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:39:56 crc kubenswrapper[4833]: E1013 08:39:56.627725 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:40:07 crc kubenswrapper[4833]: I1013 08:40:07.627486 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:40:07 crc kubenswrapper[4833]: E1013 08:40:07.628460 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.161339 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6pwxv"] Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.165238 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.176044 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pwxv"] Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.294204 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-utilities\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.294778 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl45p\" (UniqueName: \"kubernetes.io/projected/efb2c7bb-9316-435a-a261-957e22c38f0f-kube-api-access-rl45p\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.294901 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-catalog-content\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.396664 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-utilities\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.396837 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl45p\" (UniqueName: \"kubernetes.io/projected/efb2c7bb-9316-435a-a261-957e22c38f0f-kube-api-access-rl45p\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.396868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-catalog-content\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.397150 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-utilities\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.397191 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-catalog-content\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.413709 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rl45p\" (UniqueName: \"kubernetes.io/projected/efb2c7bb-9316-435a-a261-957e22c38f0f-kube-api-access-rl45p\") pod \"certified-operators-6pwxv\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:08 crc kubenswrapper[4833]: I1013 08:40:08.488238 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:09 crc kubenswrapper[4833]: I1013 08:40:09.072845 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pwxv"] Oct 13 08:40:09 crc kubenswrapper[4833]: I1013 08:40:09.521322 4833 generic.go:334] "Generic (PLEG): container finished" podID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerID="8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c" exitCode=0 Oct 13 08:40:09 crc kubenswrapper[4833]: I1013 08:40:09.521373 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pwxv" event={"ID":"efb2c7bb-9316-435a-a261-957e22c38f0f","Type":"ContainerDied","Data":"8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c"} Oct 13 08:40:09 crc kubenswrapper[4833]: I1013 08:40:09.521416 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pwxv" event={"ID":"efb2c7bb-9316-435a-a261-957e22c38f0f","Type":"ContainerStarted","Data":"a60ff4beb7ffd5caebbf13c2edd297633c1e495a03ae8a16134a692f80e38347"} Oct 13 08:40:10 crc kubenswrapper[4833]: I1013 08:40:10.545289 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pwxv" event={"ID":"efb2c7bb-9316-435a-a261-957e22c38f0f","Type":"ContainerStarted","Data":"2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650"} Oct 13 08:40:11 crc kubenswrapper[4833]: I1013 08:40:11.556905 4833 generic.go:334] "Generic (PLEG): container finished" podID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerID="2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650" exitCode=0 Oct 13 08:40:11 crc kubenswrapper[4833]: I1013 08:40:11.556986 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pwxv" event={"ID":"efb2c7bb-9316-435a-a261-957e22c38f0f","Type":"ContainerDied","Data":"2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650"} Oct 13 08:40:12 crc kubenswrapper[4833]: I1013 08:40:12.574304 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pwxv" event={"ID":"efb2c7bb-9316-435a-a261-957e22c38f0f","Type":"ContainerStarted","Data":"5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9"} Oct 13 08:40:12 crc kubenswrapper[4833]: I1013 08:40:12.577072 4833 generic.go:334] "Generic (PLEG): container finished" podID="a805910b-b612-45f1-9b8e-98c498855a3d" containerID="b68fcc47f23e58453f1a9592118be4db9e4f688162f6a9c59c825c7f736204ad" exitCode=0 Oct 13 08:40:12 crc kubenswrapper[4833]: I1013 08:40:12.577146 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" event={"ID":"a805910b-b612-45f1-9b8e-98c498855a3d","Type":"ContainerDied","Data":"b68fcc47f23e58453f1a9592118be4db9e4f688162f6a9c59c825c7f736204ad"} Oct 13 08:40:12 crc kubenswrapper[4833]: I1013 08:40:12.619732 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-6pwxv" podStartSLOduration=2.181507103 podStartE2EDuration="4.619708056s" podCreationTimestamp="2025-10-13 08:40:08 +0000 UTC" firstStartedPulling="2025-10-13 08:40:09.523782071 +0000 UTC m=+7899.624204977" lastFinishedPulling="2025-10-13 08:40:11.961983014 +0000 UTC m=+7902.062405930" observedRunningTime="2025-10-13 08:40:12.605863052 +0000 UTC m=+7902.706285968" watchObservedRunningTime="2025-10-13 08:40:12.619708056 +0000 UTC m=+7902.720130992" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.038726 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.143530 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-inventory\") pod \"a805910b-b612-45f1-9b8e-98c498855a3d\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.143950 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-ssh-key\") pod \"a805910b-b612-45f1-9b8e-98c498855a3d\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.144240 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p725m\" (UniqueName: \"kubernetes.io/projected/a805910b-b612-45f1-9b8e-98c498855a3d-kube-api-access-p725m\") pod \"a805910b-b612-45f1-9b8e-98c498855a3d\" (UID: \"a805910b-b612-45f1-9b8e-98c498855a3d\") " Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.150406 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a805910b-b612-45f1-9b8e-98c498855a3d-kube-api-access-p725m" (OuterVolumeSpecName: "kube-api-access-p725m") pod "a805910b-b612-45f1-9b8e-98c498855a3d" (UID: "a805910b-b612-45f1-9b8e-98c498855a3d"). InnerVolumeSpecName "kube-api-access-p725m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.182765 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a805910b-b612-45f1-9b8e-98c498855a3d" (UID: "a805910b-b612-45f1-9b8e-98c498855a3d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.192586 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-inventory" (OuterVolumeSpecName: "inventory") pod "a805910b-b612-45f1-9b8e-98c498855a3d" (UID: "a805910b-b612-45f1-9b8e-98c498855a3d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.247752 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.247804 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p725m\" (UniqueName: \"kubernetes.io/projected/a805910b-b612-45f1-9b8e-98c498855a3d-kube-api-access-p725m\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.247826 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a805910b-b612-45f1-9b8e-98c498855a3d-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.599884 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" event={"ID":"a805910b-b612-45f1-9b8e-98c498855a3d","Type":"ContainerDied","Data":"992b58ad3a7a58a6e8681031dde4366170fc1532268f908e4d21f9fabd553dda"} Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.600153 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992b58ad3a7a58a6e8681031dde4366170fc1532268f908e4d21f9fabd553dda" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.599967 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-f96v8" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.751460 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-w8brv"] Oct 13 08:40:14 crc kubenswrapper[4833]: E1013 08:40:14.751950 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a805910b-b612-45f1-9b8e-98c498855a3d" containerName="configure-os-openstack-openstack-cell1" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.751967 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a805910b-b612-45f1-9b8e-98c498855a3d" containerName="configure-os-openstack-openstack-cell1" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.752171 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a805910b-b612-45f1-9b8e-98c498855a3d" containerName="configure-os-openstack-openstack-cell1" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.756354 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.759622 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.759823 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.760352 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.760516 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.762697 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-w8brv"] Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.860963 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdvhk\" (UniqueName: \"kubernetes.io/projected/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-kube-api-access-tdvhk\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.861019 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.861156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-inventory-0\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.962905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-inventory-0\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.963618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdvhk\" (UniqueName: \"kubernetes.io/projected/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-kube-api-access-tdvhk\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.963657 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.968755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" 
(UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-inventory-0\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.969151 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:14 crc kubenswrapper[4833]: I1013 08:40:14.994143 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdvhk\" (UniqueName: \"kubernetes.io/projected/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-kube-api-access-tdvhk\") pod \"ssh-known-hosts-openstack-w8brv\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:15 crc kubenswrapper[4833]: I1013 08:40:15.113764 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:15 crc kubenswrapper[4833]: I1013 08:40:15.727398 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-w8brv"] Oct 13 08:40:15 crc kubenswrapper[4833]: W1013 08:40:15.731210 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5672816_d3a2_49fb_bbeb_eb03f3d1639d.slice/crio-909cecf0c12e5231e14cf9b1133f3cd2520ab0663124f387641b8563c987ab15 WatchSource:0}: Error finding container 909cecf0c12e5231e14cf9b1133f3cd2520ab0663124f387641b8563c987ab15: Status 404 returned error can't find the container with id 909cecf0c12e5231e14cf9b1133f3cd2520ab0663124f387641b8563c987ab15 Oct 13 08:40:15 crc kubenswrapper[4833]: I1013 08:40:15.733747 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 08:40:16 crc kubenswrapper[4833]: I1013 08:40:16.624441 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-w8brv" event={"ID":"d5672816-d3a2-49fb-bbeb-eb03f3d1639d","Type":"ContainerStarted","Data":"a0772ee2771220d757b4f9392378ee6388b62f6cf8b2e5a284658e551ab4ee06"} Oct 13 08:40:16 crc kubenswrapper[4833]: I1013 08:40:16.624798 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-w8brv" event={"ID":"d5672816-d3a2-49fb-bbeb-eb03f3d1639d","Type":"ContainerStarted","Data":"909cecf0c12e5231e14cf9b1133f3cd2520ab0663124f387641b8563c987ab15"} Oct 13 08:40:16 crc kubenswrapper[4833]: I1013 08:40:16.650750 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-w8brv" podStartSLOduration=2.173661652 podStartE2EDuration="2.650727874s" podCreationTimestamp="2025-10-13 08:40:14 +0000 UTC" firstStartedPulling="2025-10-13 08:40:15.733466879 +0000 UTC m=+7905.833889805" lastFinishedPulling="2025-10-13 08:40:16.210533101 +0000 UTC m=+7906.310956027" observedRunningTime="2025-10-13 08:40:16.647258685 +0000 UTC m=+7906.747681622" watchObservedRunningTime="2025-10-13 08:40:16.650727874 +0000 UTC m=+7906.751150800" Oct 13 08:40:18 crc kubenswrapper[4833]: I1013 08:40:18.489107 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 
08:40:18 crc kubenswrapper[4833]: I1013 08:40:18.490218 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:18 crc kubenswrapper[4833]: I1013 08:40:18.582875 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:18 crc kubenswrapper[4833]: I1013 08:40:18.628087 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:40:18 crc kubenswrapper[4833]: E1013 08:40:18.628791 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:40:18 crc kubenswrapper[4833]: I1013 08:40:18.719008 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:18 crc kubenswrapper[4833]: I1013 08:40:18.836876 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pwxv"] Oct 13 08:40:20 crc kubenswrapper[4833]: I1013 08:40:20.671882 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6pwxv" podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerName="registry-server" containerID="cri-o://5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9" gracePeriod=2 Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.268068 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.339304 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-catalog-content\") pod \"efb2c7bb-9316-435a-a261-957e22c38f0f\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.339806 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl45p\" (UniqueName: \"kubernetes.io/projected/efb2c7bb-9316-435a-a261-957e22c38f0f-kube-api-access-rl45p\") pod \"efb2c7bb-9316-435a-a261-957e22c38f0f\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.339886 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-utilities\") pod \"efb2c7bb-9316-435a-a261-957e22c38f0f\" (UID: \"efb2c7bb-9316-435a-a261-957e22c38f0f\") " Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.340726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-utilities" (OuterVolumeSpecName: "utilities") pod "efb2c7bb-9316-435a-a261-957e22c38f0f" (UID: "efb2c7bb-9316-435a-a261-957e22c38f0f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.346592 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb2c7bb-9316-435a-a261-957e22c38f0f-kube-api-access-rl45p" (OuterVolumeSpecName: "kube-api-access-rl45p") pod "efb2c7bb-9316-435a-a261-957e22c38f0f" (UID: "efb2c7bb-9316-435a-a261-957e22c38f0f"). InnerVolumeSpecName "kube-api-access-rl45p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.381401 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efb2c7bb-9316-435a-a261-957e22c38f0f" (UID: "efb2c7bb-9316-435a-a261-957e22c38f0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.441853 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl45p\" (UniqueName: \"kubernetes.io/projected/efb2c7bb-9316-435a-a261-957e22c38f0f-kube-api-access-rl45p\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.441884 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.441895 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb2c7bb-9316-435a-a261-957e22c38f0f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.687820 4833 generic.go:334] "Generic (PLEG): container finished" podID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerID="5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9" exitCode=0 Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.687873 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pwxv" event={"ID":"efb2c7bb-9316-435a-a261-957e22c38f0f","Type":"ContainerDied","Data":"5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9"} Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.687900 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pwxv" event={"ID":"efb2c7bb-9316-435a-a261-957e22c38f0f","Type":"ContainerDied","Data":"a60ff4beb7ffd5caebbf13c2edd297633c1e495a03ae8a16134a692f80e38347"} Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.687911 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6pwxv" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.687917 4833 scope.go:117] "RemoveContainer" containerID="5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.736681 4833 scope.go:117] "RemoveContainer" containerID="2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.743852 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pwxv"] Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.771147 4833 scope.go:117] "RemoveContainer" containerID="8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.780188 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6pwxv"] Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.814425 4833 scope.go:117] "RemoveContainer" containerID="5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9" Oct 13 08:40:21 crc kubenswrapper[4833]: E1013 08:40:21.814913 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9\": container with ID starting with 5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9 not found: ID does not exist" containerID="5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.814947 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9"} err="failed to get container status \"5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9\": rpc error: code = NotFound desc = could not find container \"5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9\": container with ID starting with 5d1e44fc0857eb84acebe2982ad5fdc189069ca2841442d3ae60a6ca6fe2e1b9 not found: ID does not exist" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.814966 4833 scope.go:117] "RemoveContainer" containerID="2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650" Oct 13 08:40:21 crc kubenswrapper[4833]: E1013 08:40:21.815174 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650\": container with ID starting with 2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650 not found: ID does not exist" containerID="2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.815197 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650"} err="failed to get container status \"2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650\": rpc error: code = NotFound desc = could not find container \"2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650\": container with ID starting with 2591aea4398ab9e35f28ea7fbf6bb6a4247a5a4a6b9e71e2405c0913d948f650 not found: ID does not exist" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.815213 4833 scope.go:117] "RemoveContainer" 
containerID="8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c" Oct 13 08:40:21 crc kubenswrapper[4833]: E1013 08:40:21.815418 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c\": container with ID starting with 8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c not found: ID does not exist" containerID="8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c" Oct 13 08:40:21 crc kubenswrapper[4833]: I1013 08:40:21.815455 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c"} err="failed to get container status \"8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c\": rpc error: code = NotFound desc = could not find container \"8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c\": container with ID starting with 8121dda0f56f2dbd72aa775ec9cdd5500b2a46b2257f73700d0a975b17dd676c not found: ID does not exist" Oct 13 08:40:22 crc kubenswrapper[4833]: I1013 08:40:22.638978 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" path="/var/lib/kubelet/pods/efb2c7bb-9316-435a-a261-957e22c38f0f/volumes" Oct 13 08:40:25 crc kubenswrapper[4833]: I1013 08:40:25.738277 4833 generic.go:334] "Generic (PLEG): container finished" podID="d5672816-d3a2-49fb-bbeb-eb03f3d1639d" containerID="a0772ee2771220d757b4f9392378ee6388b62f6cf8b2e5a284658e551ab4ee06" exitCode=0 Oct 13 08:40:25 crc kubenswrapper[4833]: I1013 08:40:25.738329 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-w8brv" event={"ID":"d5672816-d3a2-49fb-bbeb-eb03f3d1639d","Type":"ContainerDied","Data":"a0772ee2771220d757b4f9392378ee6388b62f6cf8b2e5a284658e551ab4ee06"} Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.257852 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.398463 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-ssh-key-openstack-cell1\") pod \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.398896 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdvhk\" (UniqueName: \"kubernetes.io/projected/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-kube-api-access-tdvhk\") pod \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.399023 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-inventory-0\") pod \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\" (UID: \"d5672816-d3a2-49fb-bbeb-eb03f3d1639d\") " Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.405271 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-kube-api-access-tdvhk" (OuterVolumeSpecName: "kube-api-access-tdvhk") pod "d5672816-d3a2-49fb-bbeb-eb03f3d1639d" (UID: "d5672816-d3a2-49fb-bbeb-eb03f3d1639d"). InnerVolumeSpecName "kube-api-access-tdvhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.432127 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d5672816-d3a2-49fb-bbeb-eb03f3d1639d" (UID: "d5672816-d3a2-49fb-bbeb-eb03f3d1639d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.437912 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d5672816-d3a2-49fb-bbeb-eb03f3d1639d" (UID: "d5672816-d3a2-49fb-bbeb-eb03f3d1639d"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.501454 4833 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.501508 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.501528 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdvhk\" (UniqueName: \"kubernetes.io/projected/d5672816-d3a2-49fb-bbeb-eb03f3d1639d-kube-api-access-tdvhk\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.775866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-w8brv" event={"ID":"d5672816-d3a2-49fb-bbeb-eb03f3d1639d","Type":"ContainerDied","Data":"909cecf0c12e5231e14cf9b1133f3cd2520ab0663124f387641b8563c987ab15"} Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.775927 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="909cecf0c12e5231e14cf9b1133f3cd2520ab0663124f387641b8563c987ab15" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.775947 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-w8brv" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.863242 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-f2xkf"] Oct 13 08:40:27 crc kubenswrapper[4833]: E1013 08:40:27.864017 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5672816-d3a2-49fb-bbeb-eb03f3d1639d" containerName="ssh-known-hosts-openstack" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.864095 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5672816-d3a2-49fb-bbeb-eb03f3d1639d" containerName="ssh-known-hosts-openstack" Oct 13 08:40:27 crc kubenswrapper[4833]: E1013 08:40:27.864163 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerName="extract-utilities" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.864225 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerName="extract-utilities" Oct 13 08:40:27 crc kubenswrapper[4833]: E1013 08:40:27.864284 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerName="extract-content" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.864365 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerName="extract-content" Oct 13 08:40:27 crc kubenswrapper[4833]: E1013 08:40:27.864441 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerName="registry-server" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.864491 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerName="registry-server" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.864828 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="efb2c7bb-9316-435a-a261-957e22c38f0f" containerName="registry-server" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.864930 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5672816-d3a2-49fb-bbeb-eb03f3d1639d" containerName="ssh-known-hosts-openstack" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.865778 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.869525 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.869600 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.869521 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.870827 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:40:27 crc kubenswrapper[4833]: I1013 08:40:27.872956 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-f2xkf"] Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.012760 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-inventory\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.013163 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-ssh-key\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.013339 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9dr\" (UniqueName: \"kubernetes.io/projected/a29203c6-0802-488f-9269-b4725a8923b2-kube-api-access-bj9dr\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.115904 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-ssh-key\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.116311 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9dr\" (UniqueName: \"kubernetes.io/projected/a29203c6-0802-488f-9269-b4725a8923b2-kube-api-access-bj9dr\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.116493 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-inventory\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.119954 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-ssh-key\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.121159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-inventory\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.153001 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9dr\" (UniqueName: \"kubernetes.io/projected/a29203c6-0802-488f-9269-b4725a8923b2-kube-api-access-bj9dr\") pod \"run-os-openstack-openstack-cell1-f2xkf\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.185017 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:28 crc kubenswrapper[4833]: I1013 08:40:28.789977 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-f2xkf"] Oct 13 08:40:29 crc kubenswrapper[4833]: I1013 08:40:29.809419 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" event={"ID":"a29203c6-0802-488f-9269-b4725a8923b2","Type":"ContainerStarted","Data":"46d54c178edccb801613be61eb02d2c5eecb8719f77290c4467dd87b0eecca8a"} Oct 13 08:40:30 crc kubenswrapper[4833]: I1013 08:40:30.820359 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" event={"ID":"a29203c6-0802-488f-9269-b4725a8923b2","Type":"ContainerStarted","Data":"e41386fbfe2daefdad721dd665927a0ba715db1fa78ed2047a22293cd10704a2"} Oct 13 08:40:30 crc kubenswrapper[4833]: I1013 08:40:30.840676 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" podStartSLOduration=3.106130727 podStartE2EDuration="3.840654194s" podCreationTimestamp="2025-10-13 08:40:27 +0000 UTC" firstStartedPulling="2025-10-13 08:40:28.801151151 +0000 UTC m=+7918.901574067" lastFinishedPulling="2025-10-13 08:40:29.535674588 +0000 UTC m=+7919.636097534" observedRunningTime="2025-10-13 08:40:30.834856509 +0000 UTC m=+7920.935279475" watchObservedRunningTime="2025-10-13 08:40:30.840654194 +0000 UTC m=+7920.941077110" Oct 13 08:40:33 crc kubenswrapper[4833]: I1013 08:40:33.626992 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:40:33 crc kubenswrapper[4833]: E1013 08:40:33.627929 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:40:37 crc kubenswrapper[4833]: I1013 08:40:37.904810 4833 generic.go:334] "Generic (PLEG): container finished" podID="a29203c6-0802-488f-9269-b4725a8923b2" containerID="e41386fbfe2daefdad721dd665927a0ba715db1fa78ed2047a22293cd10704a2" exitCode=0 Oct 13 08:40:37 crc kubenswrapper[4833]: I1013 08:40:37.904937 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" event={"ID":"a29203c6-0802-488f-9269-b4725a8923b2","Type":"ContainerDied","Data":"e41386fbfe2daefdad721dd665927a0ba715db1fa78ed2047a22293cd10704a2"} Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.376706 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.512500 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj9dr\" (UniqueName: \"kubernetes.io/projected/a29203c6-0802-488f-9269-b4725a8923b2-kube-api-access-bj9dr\") pod \"a29203c6-0802-488f-9269-b4725a8923b2\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.512616 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-ssh-key\") pod \"a29203c6-0802-488f-9269-b4725a8923b2\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.512733 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-inventory\") pod \"a29203c6-0802-488f-9269-b4725a8923b2\" (UID: \"a29203c6-0802-488f-9269-b4725a8923b2\") " Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.518914 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29203c6-0802-488f-9269-b4725a8923b2-kube-api-access-bj9dr" (OuterVolumeSpecName: "kube-api-access-bj9dr") pod "a29203c6-0802-488f-9269-b4725a8923b2" (UID: "a29203c6-0802-488f-9269-b4725a8923b2"). InnerVolumeSpecName "kube-api-access-bj9dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.542347 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-inventory" (OuterVolumeSpecName: "inventory") pod "a29203c6-0802-488f-9269-b4725a8923b2" (UID: "a29203c6-0802-488f-9269-b4725a8923b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.568136 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a29203c6-0802-488f-9269-b4725a8923b2" (UID: "a29203c6-0802-488f-9269-b4725a8923b2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.616065 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.616111 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj9dr\" (UniqueName: \"kubernetes.io/projected/a29203c6-0802-488f-9269-b4725a8923b2-kube-api-access-bj9dr\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.616125 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a29203c6-0802-488f-9269-b4725a8923b2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.949416 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" event={"ID":"a29203c6-0802-488f-9269-b4725a8923b2","Type":"ContainerDied","Data":"46d54c178edccb801613be61eb02d2c5eecb8719f77290c4467dd87b0eecca8a"} Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.949801 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d54c178edccb801613be61eb02d2c5eecb8719f77290c4467dd87b0eecca8a" Oct 13 08:40:39 crc kubenswrapper[4833]: I1013 08:40:39.949864 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-f2xkf" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.017431 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-nk465"] Oct 13 08:40:40 crc kubenswrapper[4833]: E1013 08:40:40.017850 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29203c6-0802-488f-9269-b4725a8923b2" containerName="run-os-openstack-openstack-cell1" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.017868 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29203c6-0802-488f-9269-b4725a8923b2" containerName="run-os-openstack-openstack-cell1" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.018099 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29203c6-0802-488f-9269-b4725a8923b2" containerName="run-os-openstack-openstack-cell1" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.018813 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.021922 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.021989 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.023322 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.023710 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.042058 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-nk465"] Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.128795 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.129003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-inventory\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.129090 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r92w8\" (UniqueName: \"kubernetes.io/projected/1e0af3e4-7b62-491a-9bce-c6f749f15512-kube-api-access-r92w8\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.231921 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-inventory\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.232695 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r92w8\" (UniqueName: \"kubernetes.io/projected/1e0af3e4-7b62-491a-9bce-c6f749f15512-kube-api-access-r92w8\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.232954 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.242987 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-inventory\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.244028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.255018 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r92w8\" (UniqueName: \"kubernetes.io/projected/1e0af3e4-7b62-491a-9bce-c6f749f15512-kube-api-access-r92w8\") pod \"reboot-os-openstack-openstack-cell1-nk465\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.340994 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:40:40 crc kubenswrapper[4833]: I1013 08:40:40.962428 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-nk465"] Oct 13 08:40:41 crc kubenswrapper[4833]: I1013 08:40:41.980096 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" event={"ID":"1e0af3e4-7b62-491a-9bce-c6f749f15512","Type":"ContainerStarted","Data":"950b6df7814b8eac1d1bfb269244080d45c89fccaa1f71191921619b833583b5"} Oct 13 08:40:42 crc kubenswrapper[4833]: I1013 08:40:42.994416 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" event={"ID":"1e0af3e4-7b62-491a-9bce-c6f749f15512","Type":"ContainerStarted","Data":"28cc4e0e9968a1706f626eb8d8b66a4ffe398eda85d109347c25adad98b876ec"} Oct 13 08:40:43 crc kubenswrapper[4833]: I1013 08:40:43.021830 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" podStartSLOduration=3.315635145 podStartE2EDuration="4.021796915s" podCreationTimestamp="2025-10-13 08:40:39 +0000 UTC" firstStartedPulling="2025-10-13 08:40:40.96001962 +0000 UTC m=+7931.060442546" lastFinishedPulling="2025-10-13 08:40:41.66618139 +0000 UTC m=+7931.766604316" observedRunningTime="2025-10-13 08:40:43.016131984 +0000 UTC m=+7933.116554900" watchObservedRunningTime="2025-10-13 08:40:43.021796915 +0000 UTC m=+7933.122219871" Oct 13 08:40:48 crc kubenswrapper[4833]: I1013 08:40:48.628151 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:40:48 crc kubenswrapper[4833]: E1013 08:40:48.629214 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:40:59 crc kubenswrapper[4833]: I1013 08:40:59.202428 4833 
generic.go:334] "Generic (PLEG): container finished" podID="1e0af3e4-7b62-491a-9bce-c6f749f15512" containerID="28cc4e0e9968a1706f626eb8d8b66a4ffe398eda85d109347c25adad98b876ec" exitCode=0 Oct 13 08:40:59 crc kubenswrapper[4833]: I1013 08:40:59.202497 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" event={"ID":"1e0af3e4-7b62-491a-9bce-c6f749f15512","Type":"ContainerDied","Data":"28cc4e0e9968a1706f626eb8d8b66a4ffe398eda85d109347c25adad98b876ec"} Oct 13 08:40:59 crc kubenswrapper[4833]: I1013 08:40:59.627981 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:40:59 crc kubenswrapper[4833]: E1013 08:40:59.628853 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.694630 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.801331 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-inventory\") pod \"1e0af3e4-7b62-491a-9bce-c6f749f15512\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.801416 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-ssh-key\") pod \"1e0af3e4-7b62-491a-9bce-c6f749f15512\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.801502 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r92w8\" (UniqueName: \"kubernetes.io/projected/1e0af3e4-7b62-491a-9bce-c6f749f15512-kube-api-access-r92w8\") pod \"1e0af3e4-7b62-491a-9bce-c6f749f15512\" (UID: \"1e0af3e4-7b62-491a-9bce-c6f749f15512\") " Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.809700 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0af3e4-7b62-491a-9bce-c6f749f15512-kube-api-access-r92w8" (OuterVolumeSpecName: "kube-api-access-r92w8") pod "1e0af3e4-7b62-491a-9bce-c6f749f15512" (UID: "1e0af3e4-7b62-491a-9bce-c6f749f15512"). InnerVolumeSpecName "kube-api-access-r92w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.841824 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e0af3e4-7b62-491a-9bce-c6f749f15512" (UID: "1e0af3e4-7b62-491a-9bce-c6f749f15512"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.845629 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-inventory" (OuterVolumeSpecName: "inventory") pod "1e0af3e4-7b62-491a-9bce-c6f749f15512" (UID: "1e0af3e4-7b62-491a-9bce-c6f749f15512"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.904571 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.904626 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e0af3e4-7b62-491a-9bce-c6f749f15512-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:00 crc kubenswrapper[4833]: I1013 08:41:00.904640 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r92w8\" (UniqueName: \"kubernetes.io/projected/1e0af3e4-7b62-491a-9bce-c6f749f15512-kube-api-access-r92w8\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.228205 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" event={"ID":"1e0af3e4-7b62-491a-9bce-c6f749f15512","Type":"ContainerDied","Data":"950b6df7814b8eac1d1bfb269244080d45c89fccaa1f71191921619b833583b5"} Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.228257 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-nk465" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.228271 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="950b6df7814b8eac1d1bfb269244080d45c89fccaa1f71191921619b833583b5" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.334365 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-d79ll"] Oct 13 08:41:01 crc kubenswrapper[4833]: E1013 08:41:01.334835 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0af3e4-7b62-491a-9bce-c6f749f15512" containerName="reboot-os-openstack-openstack-cell1" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.334853 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0af3e4-7b62-491a-9bce-c6f749f15512" containerName="reboot-os-openstack-openstack-cell1" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.335076 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0af3e4-7b62-491a-9bce-c6f749f15512" containerName="reboot-os-openstack-openstack-cell1" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.336010 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.339510 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.339888 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.340006 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.340169 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.340439 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.340612 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.340691 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.340947 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.369735 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-d79ll"] Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418124 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418189 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418220 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418270 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-inventory\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: 
\"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418318 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ssh-key\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418369 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418432 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418492 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418565 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418591 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhx9l\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-kube-api-access-fhx9l\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418627 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418737 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.418781 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.520885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.521156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.521183 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhx9l\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-kube-api-access-fhx9l\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.521212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.521247 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.521274 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.521301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.521398 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.521624 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.523736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.523867 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-inventory\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.523909 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 
08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.523994 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ssh-key\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.524079 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.524143 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.527167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.528211 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.529185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.529986 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.533730 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " 
pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.533744 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.534220 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.534489 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.536484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-inventory\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.539460 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.541968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.543674 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.544338 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " 
pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.546117 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhx9l\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-kube-api-access-fhx9l\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.558119 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ssh-key\") pod \"install-certs-openstack-openstack-cell1-d79ll\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:01 crc kubenswrapper[4833]: I1013 08:41:01.661361 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:02 crc kubenswrapper[4833]: I1013 08:41:02.234622 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-d79ll"] Oct 13 08:41:03 crc kubenswrapper[4833]: I1013 08:41:03.255592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" event={"ID":"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb","Type":"ContainerStarted","Data":"bffb22a2d366ba08ed4d570afc1657acbdff484f8b80ab8a8789aed5145c011b"} Oct 13 08:41:03 crc kubenswrapper[4833]: I1013 08:41:03.255889 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" event={"ID":"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb","Type":"ContainerStarted","Data":"20760c5430086d06f5d5f455eccd0b58e100639f9aba572d56dcc16e6cf789fd"} Oct 13 08:41:03 crc kubenswrapper[4833]: I1013 08:41:03.297665 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" podStartSLOduration=1.7173564030000001 podStartE2EDuration="2.297632251s" podCreationTimestamp="2025-10-13 08:41:01 +0000 UTC" firstStartedPulling="2025-10-13 08:41:02.233095056 +0000 UTC m=+7952.333517982" lastFinishedPulling="2025-10-13 08:41:02.813370914 +0000 UTC m=+7952.913793830" observedRunningTime="2025-10-13 08:41:03.277517849 +0000 UTC m=+7953.377940765" watchObservedRunningTime="2025-10-13 08:41:03.297632251 +0000 UTC m=+7953.398055247" Oct 13 08:41:14 crc kubenswrapper[4833]: I1013 08:41:14.627211 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:41:14 crc kubenswrapper[4833]: E1013 08:41:14.627985 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:41:25 crc kubenswrapper[4833]: I1013 08:41:25.627100 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:41:25 crc kubenswrapper[4833]: E1013 08:41:25.628163 4833 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:41:40 crc kubenswrapper[4833]: I1013 08:41:40.644471 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:41:40 crc kubenswrapper[4833]: E1013 08:41:40.645879 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:41:41 crc kubenswrapper[4833]: I1013 08:41:41.752409 4833 generic.go:334] "Generic (PLEG): container finished" podID="26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" containerID="bffb22a2d366ba08ed4d570afc1657acbdff484f8b80ab8a8789aed5145c011b" exitCode=0 Oct 13 08:41:41 crc kubenswrapper[4833]: I1013 08:41:41.752494 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" event={"ID":"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb","Type":"ContainerDied","Data":"bffb22a2d366ba08ed4d570afc1657acbdff484f8b80ab8a8789aed5145c011b"} Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.263698 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.406814 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-ovn-default-certs-0\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.406884 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-libvirt-combined-ca-bundle\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.406929 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ovn-combined-ca-bundle\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.406999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-metadata-combined-ca-bundle\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.407025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-sriov-combined-ca-bundle\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.407049 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-inventory\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.407074 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ssh-key\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.407777 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-nova-combined-ca-bundle\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.408102 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-telemetry-default-certs-0\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.408135 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-telemetry-combined-ca-bundle\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.408154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-dhcp-combined-ca-bundle\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.408220 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhx9l\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-kube-api-access-fhx9l\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.408251 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-neutron-metadata-default-certs-0\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.408276 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-libvirt-default-certs-0\") pod 
\"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.408322 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-bootstrap-combined-ca-bundle\") pod \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\" (UID: \"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb\") " Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.412716 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.413263 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.413807 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.413826 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.414669 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.414685 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.416250 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.416360 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.416582 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-kube-api-access-fhx9l" (OuterVolumeSpecName: "kube-api-access-fhx9l") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "kube-api-access-fhx9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.417416 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.417487 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.417787 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.425479 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.451496 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-inventory" (OuterVolumeSpecName: "inventory") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.454070 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" (UID: "26d8c1a5-4eda-4010-8bd8-5634ce08c7fb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518346 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518391 4833 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518405 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518415 4833 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518427 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518436 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhx9l\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-kube-api-access-fhx9l\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518448 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518459 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518469 4833 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-bootstrap-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518478 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518487 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518497 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518506 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518520 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.518529 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d8c1a5-4eda-4010-8bd8-5634ce08c7fb-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.797014 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" event={"ID":"26d8c1a5-4eda-4010-8bd8-5634ce08c7fb","Type":"ContainerDied","Data":"20760c5430086d06f5d5f455eccd0b58e100639f9aba572d56dcc16e6cf789fd"} Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.797413 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20760c5430086d06f5d5f455eccd0b58e100639f9aba572d56dcc16e6cf789fd" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.797656 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-d79ll" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.875227 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-c8thl"] Oct 13 08:41:43 crc kubenswrapper[4833]: E1013 08:41:43.875663 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" containerName="install-certs-openstack-openstack-cell1" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.875681 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" containerName="install-certs-openstack-openstack-cell1" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.875891 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d8c1a5-4eda-4010-8bd8-5634ce08c7fb" containerName="install-certs-openstack-openstack-cell1" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.876665 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.879788 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.892132 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-c8thl"] Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.892949 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.893094 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.893303 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 13 08:41:43 crc kubenswrapper[4833]: I1013 08:41:43.893312 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.029518 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ccvd\" (UniqueName: \"kubernetes.io/projected/21601e8f-e29f-4fc3-a938-4fc556422961-kube-api-access-4ccvd\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.029845 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.029923 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-inventory\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.030158 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/21601e8f-e29f-4fc3-a938-4fc556422961-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.030420 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ssh-key\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.132868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ssh-key\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: 
\"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.133101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ccvd\" (UniqueName: \"kubernetes.io/projected/21601e8f-e29f-4fc3-a938-4fc556422961-kube-api-access-4ccvd\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.133194 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.133234 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-inventory\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.133333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/21601e8f-e29f-4fc3-a938-4fc556422961-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.134883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/21601e8f-e29f-4fc3-a938-4fc556422961-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.138298 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ssh-key\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.141053 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-inventory\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.151289 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.167894 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ccvd\" (UniqueName: 
\"kubernetes.io/projected/21601e8f-e29f-4fc3-a938-4fc556422961-kube-api-access-4ccvd\") pod \"ovn-openstack-openstack-cell1-c8thl\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.207825 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:41:44 crc kubenswrapper[4833]: I1013 08:41:44.827249 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-c8thl"] Oct 13 08:41:45 crc kubenswrapper[4833]: I1013 08:41:45.818549 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-c8thl" event={"ID":"21601e8f-e29f-4fc3-a938-4fc556422961","Type":"ContainerStarted","Data":"0055a464f4b247e53f3cf3e5271d5921ef0accc0ed20d0e9f78904ce89508d5d"} Oct 13 08:41:45 crc kubenswrapper[4833]: I1013 08:41:45.819122 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-c8thl" event={"ID":"21601e8f-e29f-4fc3-a938-4fc556422961","Type":"ContainerStarted","Data":"21de97b575c86e88673fd6e6065eeb18721339e37dfa5efa6e63c7ad7d985ba3"} Oct 13 08:41:45 crc kubenswrapper[4833]: I1013 08:41:45.841513 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-c8thl" podStartSLOduration=2.288503679 podStartE2EDuration="2.841496481s" podCreationTimestamp="2025-10-13 08:41:43 +0000 UTC" firstStartedPulling="2025-10-13 08:41:44.843733686 +0000 UTC m=+7994.944156602" lastFinishedPulling="2025-10-13 08:41:45.396726458 +0000 UTC m=+7995.497149404" observedRunningTime="2025-10-13 08:41:45.833887875 +0000 UTC m=+7995.934310781" watchObservedRunningTime="2025-10-13 08:41:45.841496481 +0000 UTC m=+7995.941919397" Oct 13 08:41:53 crc kubenswrapper[4833]: I1013 08:41:53.627079 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:41:53 crc kubenswrapper[4833]: E1013 08:41:53.628183 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:42:06 crc kubenswrapper[4833]: I1013 08:42:06.629269 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:42:07 crc kubenswrapper[4833]: I1013 08:42:07.064984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"22856144922dcb85420c849eb3f6308a0a81a29149d29ed40f72d81846153551"} Oct 13 08:42:54 crc kubenswrapper[4833]: I1013 08:42:54.582973 4833 generic.go:334] "Generic (PLEG): container finished" podID="21601e8f-e29f-4fc3-a938-4fc556422961" containerID="0055a464f4b247e53f3cf3e5271d5921ef0accc0ed20d0e9f78904ce89508d5d" exitCode=0 Oct 13 08:42:54 crc kubenswrapper[4833]: I1013 08:42:54.583031 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-c8thl" 
event={"ID":"21601e8f-e29f-4fc3-a938-4fc556422961","Type":"ContainerDied","Data":"0055a464f4b247e53f3cf3e5271d5921ef0accc0ed20d0e9f78904ce89508d5d"} Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.115798 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.206530 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ovn-combined-ca-bundle\") pod \"21601e8f-e29f-4fc3-a938-4fc556422961\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.206714 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ccvd\" (UniqueName: \"kubernetes.io/projected/21601e8f-e29f-4fc3-a938-4fc556422961-kube-api-access-4ccvd\") pod \"21601e8f-e29f-4fc3-a938-4fc556422961\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.206886 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-inventory\") pod \"21601e8f-e29f-4fc3-a938-4fc556422961\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.206960 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/21601e8f-e29f-4fc3-a938-4fc556422961-ovncontroller-config-0\") pod \"21601e8f-e29f-4fc3-a938-4fc556422961\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.207006 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ssh-key\") pod \"21601e8f-e29f-4fc3-a938-4fc556422961\" (UID: \"21601e8f-e29f-4fc3-a938-4fc556422961\") " Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.212877 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "21601e8f-e29f-4fc3-a938-4fc556422961" (UID: "21601e8f-e29f-4fc3-a938-4fc556422961"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.214835 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21601e8f-e29f-4fc3-a938-4fc556422961-kube-api-access-4ccvd" (OuterVolumeSpecName: "kube-api-access-4ccvd") pod "21601e8f-e29f-4fc3-a938-4fc556422961" (UID: "21601e8f-e29f-4fc3-a938-4fc556422961"). InnerVolumeSpecName "kube-api-access-4ccvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.245423 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "21601e8f-e29f-4fc3-a938-4fc556422961" (UID: "21601e8f-e29f-4fc3-a938-4fc556422961"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.273665 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-inventory" (OuterVolumeSpecName: "inventory") pod "21601e8f-e29f-4fc3-a938-4fc556422961" (UID: "21601e8f-e29f-4fc3-a938-4fc556422961"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.278368 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21601e8f-e29f-4fc3-a938-4fc556422961-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "21601e8f-e29f-4fc3-a938-4fc556422961" (UID: "21601e8f-e29f-4fc3-a938-4fc556422961"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.310039 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.310284 4833 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/21601e8f-e29f-4fc3-a938-4fc556422961-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.310373 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.310447 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21601e8f-e29f-4fc3-a938-4fc556422961-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.310526 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ccvd\" (UniqueName: \"kubernetes.io/projected/21601e8f-e29f-4fc3-a938-4fc556422961-kube-api-access-4ccvd\") on node \"crc\" DevicePath \"\"" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.615054 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-c8thl" event={"ID":"21601e8f-e29f-4fc3-a938-4fc556422961","Type":"ContainerDied","Data":"21de97b575c86e88673fd6e6065eeb18721339e37dfa5efa6e63c7ad7d985ba3"} Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.615121 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21de97b575c86e88673fd6e6065eeb18721339e37dfa5efa6e63c7ad7d985ba3" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.615140 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-c8thl" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.742605 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-kcvtf"] Oct 13 08:42:56 crc kubenswrapper[4833]: E1013 08:42:56.744020 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21601e8f-e29f-4fc3-a938-4fc556422961" containerName="ovn-openstack-openstack-cell1" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.744448 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="21601e8f-e29f-4fc3-a938-4fc556422961" containerName="ovn-openstack-openstack-cell1" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.744825 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="21601e8f-e29f-4fc3-a938-4fc556422961" containerName="ovn-openstack-openstack-cell1" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.745980 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.748727 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.749258 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.749749 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.749954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.750495 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.751601 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.755090 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-kcvtf"] Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.822738 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.822925 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvmpr\" (UniqueName: \"kubernetes.io/projected/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-kube-api-access-mvmpr\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.823015 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-metadata-combined-ca-bundle\") 
pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.823052 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.823096 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.823154 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.924953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.925042 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.925105 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.925149 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvmpr\" (UniqueName: \"kubernetes.io/projected/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-kube-api-access-mvmpr\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.925190 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.925223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.931291 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.931944 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.933444 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.935195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.935774 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:56 crc kubenswrapper[4833]: I1013 08:42:56.944121 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvmpr\" (UniqueName: \"kubernetes.io/projected/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-kube-api-access-mvmpr\") pod \"neutron-metadata-openstack-openstack-cell1-kcvtf\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:57 crc kubenswrapper[4833]: I1013 08:42:57.094324 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:42:57 crc kubenswrapper[4833]: I1013 08:42:57.727947 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-kcvtf"] Oct 13 08:42:58 crc kubenswrapper[4833]: I1013 08:42:58.651992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" event={"ID":"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f","Type":"ContainerStarted","Data":"fa032dce61ff40222e59a70f065107268f44e3f42c697a2456e59c3893d51505"} Oct 13 08:42:58 crc kubenswrapper[4833]: I1013 08:42:58.652804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" event={"ID":"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f","Type":"ContainerStarted","Data":"5aaa370cbd337f2f7c09662056b547ce915247b3d3b2e3b7f4e7f96f1222565e"} Oct 13 08:42:58 crc kubenswrapper[4833]: I1013 08:42:58.668215 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" podStartSLOduration=2.243090401 podStartE2EDuration="2.668174975s" podCreationTimestamp="2025-10-13 08:42:56 +0000 UTC" firstStartedPulling="2025-10-13 08:42:57.75023101 +0000 UTC m=+8067.850653926" lastFinishedPulling="2025-10-13 08:42:58.175315574 +0000 UTC m=+8068.275738500" observedRunningTime="2025-10-13 08:42:58.664108549 +0000 UTC m=+8068.764531485" watchObservedRunningTime="2025-10-13 08:42:58.668174975 +0000 UTC m=+8068.768597891" Oct 13 08:43:53 crc kubenswrapper[4833]: I1013 08:43:53.291530 4833 generic.go:334] "Generic (PLEG): container finished" podID="7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" containerID="fa032dce61ff40222e59a70f065107268f44e3f42c697a2456e59c3893d51505" exitCode=0 Oct 13 08:43:53 crc kubenswrapper[4833]: I1013 08:43:53.292216 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" event={"ID":"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f","Type":"ContainerDied","Data":"fa032dce61ff40222e59a70f065107268f44e3f42c697a2456e59c3893d51505"} Oct 13 08:43:54 crc kubenswrapper[4833]: I1013 08:43:54.888617 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:43:54 crc kubenswrapper[4833]: I1013 08:43:54.992227 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvmpr\" (UniqueName: \"kubernetes.io/projected/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-kube-api-access-mvmpr\") pod \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " Oct 13 08:43:54 crc kubenswrapper[4833]: I1013 08:43:54.992295 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-metadata-combined-ca-bundle\") pod \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " Oct 13 08:43:54 crc kubenswrapper[4833]: I1013 08:43:54.992317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-ssh-key\") pod \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " Oct 13 08:43:54 crc kubenswrapper[4833]: I1013 08:43:54.992364 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-inventory\") pod \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " Oct 13 08:43:54 crc kubenswrapper[4833]: I1013 08:43:54.992388 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-nova-metadata-neutron-config-0\") pod \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " Oct 13 08:43:54 crc kubenswrapper[4833]: I1013 08:43:54.992429 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\" (UID: \"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f\") " Oct 13 08:43:54 crc kubenswrapper[4833]: I1013 08:43:54.998468 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-kube-api-access-mvmpr" (OuterVolumeSpecName: "kube-api-access-mvmpr") pod "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" (UID: "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f"). InnerVolumeSpecName "kube-api-access-mvmpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.005733 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" (UID: "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.022228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" (UID: "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.026666 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" (UID: "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.028859 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-inventory" (OuterVolumeSpecName: "inventory") pod "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" (UID: "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.032300 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" (UID: "7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.095689 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.095746 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvmpr\" (UniqueName: \"kubernetes.io/projected/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-kube-api-access-mvmpr\") on node \"crc\" DevicePath \"\"" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.095765 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.095778 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.095790 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.095802 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.315889 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" event={"ID":"7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f","Type":"ContainerDied","Data":"5aaa370cbd337f2f7c09662056b547ce915247b3d3b2e3b7f4e7f96f1222565e"} Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.315986 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aaa370cbd337f2f7c09662056b547ce915247b3d3b2e3b7f4e7f96f1222565e" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.315929 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-kcvtf" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.454985 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-dkj5f"] Oct 13 08:43:55 crc kubenswrapper[4833]: E1013 08:43:55.455420 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" containerName="neutron-metadata-openstack-openstack-cell1" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.455436 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" containerName="neutron-metadata-openstack-openstack-cell1" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.455667 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f" containerName="neutron-metadata-openstack-openstack-cell1" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.456371 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.458446 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.459029 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.459181 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.460682 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.460851 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.476106 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-dkj5f"] Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.607920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-inventory\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.608324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-ssh-key\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.608390 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz22d\" (UniqueName: \"kubernetes.io/projected/d3e31455-5a84-415b-be9a-91e2f7033095-kube-api-access-fz22d\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.608463 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.608733 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.711459 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-ssh-key\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" 
(UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.711599 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz22d\" (UniqueName: \"kubernetes.io/projected/d3e31455-5a84-415b-be9a-91e2f7033095-kube-api-access-fz22d\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.711720 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.711872 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.711971 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-inventory\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.717302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.718722 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-ssh-key\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.719862 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.730297 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-inventory\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.734583 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz22d\" (UniqueName: 
\"kubernetes.io/projected/d3e31455-5a84-415b-be9a-91e2f7033095-kube-api-access-fz22d\") pod \"libvirt-openstack-openstack-cell1-dkj5f\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:55 crc kubenswrapper[4833]: I1013 08:43:55.772814 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:43:56 crc kubenswrapper[4833]: I1013 08:43:56.351994 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-dkj5f"] Oct 13 08:43:57 crc kubenswrapper[4833]: I1013 08:43:57.341091 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" event={"ID":"d3e31455-5a84-415b-be9a-91e2f7033095","Type":"ContainerStarted","Data":"b0cee67a6311bcb0d63210bf4967a1af1525d1bfe86e4b0dff87becbde018ac3"} Oct 13 08:43:57 crc kubenswrapper[4833]: I1013 08:43:57.341661 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" event={"ID":"d3e31455-5a84-415b-be9a-91e2f7033095","Type":"ContainerStarted","Data":"b059427683f49ae85172747fbc7a07f744e3579cefa6d7e6658a5e7ec28b6992"} Oct 13 08:43:57 crc kubenswrapper[4833]: I1013 08:43:57.372446 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" podStartSLOduration=1.908384829 podStartE2EDuration="2.37242015s" podCreationTimestamp="2025-10-13 08:43:55 +0000 UTC" firstStartedPulling="2025-10-13 08:43:56.353297257 +0000 UTC m=+8126.453720173" lastFinishedPulling="2025-10-13 08:43:56.817332578 +0000 UTC m=+8126.917755494" observedRunningTime="2025-10-13 08:43:57.359609596 +0000 UTC m=+8127.460032552" watchObservedRunningTime="2025-10-13 08:43:57.37242015 +0000 UTC m=+8127.472843096" Oct 13 08:44:30 crc kubenswrapper[4833]: I1013 08:44:30.542495 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:44:30 crc kubenswrapper[4833]: I1013 08:44:30.543341 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.344369 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fh2r5"] Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.348810 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.359754 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fh2r5"] Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.456408 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-catalog-content\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.457216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dwzd\" (UniqueName: \"kubernetes.io/projected/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-kube-api-access-7dwzd\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.457414 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-utilities\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.560396 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dwzd\" (UniqueName: \"kubernetes.io/projected/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-kube-api-access-7dwzd\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.560595 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-utilities\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.560653 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-catalog-content\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.561272 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-utilities\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.561290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-catalog-content\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.605325 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7dwzd\" (UniqueName: \"kubernetes.io/projected/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-kube-api-access-7dwzd\") pod \"community-operators-fh2r5\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") " pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:55 crc kubenswrapper[4833]: I1013 08:44:55.677860 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:44:56 crc kubenswrapper[4833]: I1013 08:44:56.258117 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fh2r5"] Oct 13 08:44:57 crc kubenswrapper[4833]: I1013 08:44:57.100946 4833 generic.go:334] "Generic (PLEG): container finished" podID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerID="4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54" exitCode=0 Oct 13 08:44:57 crc kubenswrapper[4833]: I1013 08:44:57.101085 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh2r5" event={"ID":"f425e14a-c2fa-41b0-b50b-7e11fd0c8523","Type":"ContainerDied","Data":"4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54"} Oct 13 08:44:57 crc kubenswrapper[4833]: I1013 08:44:57.101299 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh2r5" event={"ID":"f425e14a-c2fa-41b0-b50b-7e11fd0c8523","Type":"ContainerStarted","Data":"4b980853d4a86e005342b230c4f08e3e97a877598c265ecf93e03f7a20f3cc5d"} Oct 13 08:44:57 crc kubenswrapper[4833]: I1013 08:44:57.940268 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rms8h"] Oct 13 08:44:57 crc kubenswrapper[4833]: I1013 08:44:57.945007 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:57 crc kubenswrapper[4833]: I1013 08:44:57.952069 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rms8h"] Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.127072 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-utilities\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.128116 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvvw\" (UniqueName: \"kubernetes.io/projected/57e3fa35-505e-4d28-96f0-1d0fb02ed435-kube-api-access-qpvvw\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.128383 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-catalog-content\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.230858 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-catalog-content\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.230989 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-utilities\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.231051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpvvw\" (UniqueName: \"kubernetes.io/projected/57e3fa35-505e-4d28-96f0-1d0fb02ed435-kube-api-access-qpvvw\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.231582 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-utilities\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.231602 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-catalog-content\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.251388 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qpvvw\" (UniqueName: \"kubernetes.io/projected/57e3fa35-505e-4d28-96f0-1d0fb02ed435-kube-api-access-qpvvw\") pod \"redhat-marketplace-rms8h\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") " pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.296232 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:44:58 crc kubenswrapper[4833]: I1013 08:44:58.796655 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rms8h"] Oct 13 08:44:58 crc kubenswrapper[4833]: W1013 08:44:58.808769 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e3fa35_505e_4d28_96f0_1d0fb02ed435.slice/crio-0bf16bd1f6539efc43cc5cd3a51ec2b8aaefa5fba2e11262d734b8d0f5013840 WatchSource:0}: Error finding container 0bf16bd1f6539efc43cc5cd3a51ec2b8aaefa5fba2e11262d734b8d0f5013840: Status 404 returned error can't find the container with id 0bf16bd1f6539efc43cc5cd3a51ec2b8aaefa5fba2e11262d734b8d0f5013840 Oct 13 08:44:59 crc kubenswrapper[4833]: I1013 08:44:59.127284 4833 generic.go:334] "Generic (PLEG): container finished" podID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerID="04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444" exitCode=0 Oct 13 08:44:59 crc kubenswrapper[4833]: I1013 08:44:59.127397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rms8h" event={"ID":"57e3fa35-505e-4d28-96f0-1d0fb02ed435","Type":"ContainerDied","Data":"04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444"} Oct 13 08:44:59 crc kubenswrapper[4833]: I1013 08:44:59.127770 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rms8h" event={"ID":"57e3fa35-505e-4d28-96f0-1d0fb02ed435","Type":"ContainerStarted","Data":"0bf16bd1f6539efc43cc5cd3a51ec2b8aaefa5fba2e11262d734b8d0f5013840"} Oct 13 08:44:59 crc kubenswrapper[4833]: I1013 08:44:59.134913 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh2r5" event={"ID":"f425e14a-c2fa-41b0-b50b-7e11fd0c8523","Type":"ContainerStarted","Data":"f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25"} Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.150821 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rms8h" event={"ID":"57e3fa35-505e-4d28-96f0-1d0fb02ed435","Type":"ContainerStarted","Data":"0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c"} Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.155795 4833 generic.go:334] "Generic (PLEG): container finished" podID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerID="f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25" exitCode=0 Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.155850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh2r5" event={"ID":"f425e14a-c2fa-41b0-b50b-7e11fd0c8523","Type":"ContainerDied","Data":"f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25"} Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.194659 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh"] Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 
08:45:00.196003 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.199092 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.207301 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.223137 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh"] Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.284979 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113f75fd-7d34-40ba-b6a4-dd90b91c2075-config-volume\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.285104 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbjn\" (UniqueName: \"kubernetes.io/projected/113f75fd-7d34-40ba-b6a4-dd90b91c2075-kube-api-access-tfbjn\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.285138 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/113f75fd-7d34-40ba-b6a4-dd90b91c2075-secret-volume\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.388018 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfbjn\" (UniqueName: \"kubernetes.io/projected/113f75fd-7d34-40ba-b6a4-dd90b91c2075-kube-api-access-tfbjn\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.388404 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/113f75fd-7d34-40ba-b6a4-dd90b91c2075-secret-volume\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.388697 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113f75fd-7d34-40ba-b6a4-dd90b91c2075-config-volume\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.389774 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
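The collect-profiles pod above mounts a ConfigMap-backed config-volume, a Secret-backed secret-volume, and a projected kube-api-access token volume. By Kubernetes convention the projected volume surfaces the service-account token, CA bundle, and namespace under a fixed path inside the container; a sketch of consuming it from in-pod code (the path is the conventional mount point, not something this log states):

```python
from pathlib import Path

# Conventional in-container mount point of a projected kube-api-access volume.
sa = Path("/var/run/secrets/kubernetes.io/serviceaccount")

namespace = (sa / "namespace").read_text().strip()
token = (sa / "token").read_text().strip()
ca_cert = sa / "ca.crt"  # CA bundle for talking to the API server

print(namespace, token[:16] + "...", ca_cert.exists())
```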
\"kubernetes.io/configmap/113f75fd-7d34-40ba-b6a4-dd90b91c2075-config-volume\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.395457 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/113f75fd-7d34-40ba-b6a4-dd90b91c2075-secret-volume\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.404844 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfbjn\" (UniqueName: \"kubernetes.io/projected/113f75fd-7d34-40ba-b6a4-dd90b91c2075-kube-api-access-tfbjn\") pod \"collect-profiles-29339085-56vsh\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.522930 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.542361 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:45:00 crc kubenswrapper[4833]: I1013 08:45:00.542440 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:45:01 crc kubenswrapper[4833]: I1013 08:45:01.047186 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh"] Oct 13 08:45:01 crc kubenswrapper[4833]: W1013 08:45:01.059207 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod113f75fd_7d34_40ba_b6a4_dd90b91c2075.slice/crio-82476a5800e665e535cc3878363ca4c53bdc3c6f6df869dba3e066a6bdb93ce5 WatchSource:0}: Error finding container 82476a5800e665e535cc3878363ca4c53bdc3c6f6df869dba3e066a6bdb93ce5: Status 404 returned error can't find the container with id 82476a5800e665e535cc3878363ca4c53bdc3c6f6df869dba3e066a6bdb93ce5 Oct 13 08:45:01 crc kubenswrapper[4833]: I1013 08:45:01.168841 4833 generic.go:334] "Generic (PLEG): container finished" podID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerID="0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c" exitCode=0 Oct 13 08:45:01 crc kubenswrapper[4833]: I1013 08:45:01.168956 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rms8h" event={"ID":"57e3fa35-505e-4d28-96f0-1d0fb02ed435","Type":"ContainerDied","Data":"0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c"} Oct 13 08:45:01 crc kubenswrapper[4833]: I1013 08:45:01.176864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh2r5" 
event={"ID":"f425e14a-c2fa-41b0-b50b-7e11fd0c8523","Type":"ContainerStarted","Data":"c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3"} Oct 13 08:45:01 crc kubenswrapper[4833]: I1013 08:45:01.178263 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" event={"ID":"113f75fd-7d34-40ba-b6a4-dd90b91c2075","Type":"ContainerStarted","Data":"82476a5800e665e535cc3878363ca4c53bdc3c6f6df869dba3e066a6bdb93ce5"} Oct 13 08:45:01 crc kubenswrapper[4833]: E1013 08:45:01.670008 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod113f75fd_7d34_40ba_b6a4_dd90b91c2075.slice/crio-09fa82c8b65639d5b927894b605d72c3efdb95ec4b4cd8a7267ccfba5e7e54f9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod113f75fd_7d34_40ba_b6a4_dd90b91c2075.slice/crio-conmon-09fa82c8b65639d5b927894b605d72c3efdb95ec4b4cd8a7267ccfba5e7e54f9.scope\": RecentStats: unable to find data in memory cache]" Oct 13 08:45:02 crc kubenswrapper[4833]: I1013 08:45:02.189079 4833 generic.go:334] "Generic (PLEG): container finished" podID="113f75fd-7d34-40ba-b6a4-dd90b91c2075" containerID="09fa82c8b65639d5b927894b605d72c3efdb95ec4b4cd8a7267ccfba5e7e54f9" exitCode=0 Oct 13 08:45:02 crc kubenswrapper[4833]: I1013 08:45:02.189370 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" event={"ID":"113f75fd-7d34-40ba-b6a4-dd90b91c2075","Type":"ContainerDied","Data":"09fa82c8b65639d5b927894b605d72c3efdb95ec4b4cd8a7267ccfba5e7e54f9"} Oct 13 08:45:02 crc kubenswrapper[4833]: I1013 08:45:02.225132 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fh2r5" podStartSLOduration=3.634937044 podStartE2EDuration="7.225107451s" podCreationTimestamp="2025-10-13 08:44:55 +0000 UTC" firstStartedPulling="2025-10-13 08:44:57.103352492 +0000 UTC m=+8187.203775438" lastFinishedPulling="2025-10-13 08:45:00.693522909 +0000 UTC m=+8190.793945845" observedRunningTime="2025-10-13 08:45:01.234825168 +0000 UTC m=+8191.335248094" watchObservedRunningTime="2025-10-13 08:45:02.225107451 +0000 UTC m=+8192.325530377" Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.213442 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rms8h" event={"ID":"57e3fa35-505e-4d28-96f0-1d0fb02ed435","Type":"ContainerStarted","Data":"76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144"} Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.242650 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rms8h" podStartSLOduration=3.228021666 podStartE2EDuration="6.242632249s" podCreationTimestamp="2025-10-13 08:44:57 +0000 UTC" firstStartedPulling="2025-10-13 08:44:59.1311122 +0000 UTC m=+8189.231535156" lastFinishedPulling="2025-10-13 08:45:02.145722793 +0000 UTC m=+8192.246145739" observedRunningTime="2025-10-13 08:45:03.237906805 +0000 UTC m=+8193.338329721" watchObservedRunningTime="2025-10-13 08:45:03.242632249 +0000 UTC m=+8193.343055165" Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.576027 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.757621 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113f75fd-7d34-40ba-b6a4-dd90b91c2075-config-volume\") pod \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.757998 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/113f75fd-7d34-40ba-b6a4-dd90b91c2075-secret-volume\") pod \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.758070 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfbjn\" (UniqueName: \"kubernetes.io/projected/113f75fd-7d34-40ba-b6a4-dd90b91c2075-kube-api-access-tfbjn\") pod \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\" (UID: \"113f75fd-7d34-40ba-b6a4-dd90b91c2075\") " Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.758307 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113f75fd-7d34-40ba-b6a4-dd90b91c2075-config-volume" (OuterVolumeSpecName: "config-volume") pod "113f75fd-7d34-40ba-b6a4-dd90b91c2075" (UID: "113f75fd-7d34-40ba-b6a4-dd90b91c2075"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.758762 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113f75fd-7d34-40ba-b6a4-dd90b91c2075-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.766080 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113f75fd-7d34-40ba-b6a4-dd90b91c2075-kube-api-access-tfbjn" (OuterVolumeSpecName: "kube-api-access-tfbjn") pod "113f75fd-7d34-40ba-b6a4-dd90b91c2075" (UID: "113f75fd-7d34-40ba-b6a4-dd90b91c2075"). InnerVolumeSpecName "kube-api-access-tfbjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.777717 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113f75fd-7d34-40ba-b6a4-dd90b91c2075-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "113f75fd-7d34-40ba-b6a4-dd90b91c2075" (UID: "113f75fd-7d34-40ba-b6a4-dd90b91c2075"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.860644 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/113f75fd-7d34-40ba-b6a4-dd90b91c2075-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 08:45:03 crc kubenswrapper[4833]: I1013 08:45:03.860678 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfbjn\" (UniqueName: \"kubernetes.io/projected/113f75fd-7d34-40ba-b6a4-dd90b91c2075-kube-api-access-tfbjn\") on node \"crc\" DevicePath \"\"" Oct 13 08:45:04 crc kubenswrapper[4833]: I1013 08:45:04.225084 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" event={"ID":"113f75fd-7d34-40ba-b6a4-dd90b91c2075","Type":"ContainerDied","Data":"82476a5800e665e535cc3878363ca4c53bdc3c6f6df869dba3e066a6bdb93ce5"} Oct 13 08:45:04 crc kubenswrapper[4833]: I1013 08:45:04.225115 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339085-56vsh" Oct 13 08:45:04 crc kubenswrapper[4833]: I1013 08:45:04.225135 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82476a5800e665e535cc3878363ca4c53bdc3c6f6df869dba3e066a6bdb93ce5" Oct 13 08:45:04 crc kubenswrapper[4833]: I1013 08:45:04.671963 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m"] Oct 13 08:45:04 crc kubenswrapper[4833]: I1013 08:45:04.684162 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339040-5mx8m"] Oct 13 08:45:05 crc kubenswrapper[4833]: I1013 08:45:05.678097 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:45:05 crc kubenswrapper[4833]: I1013 08:45:05.678170 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:45:05 crc kubenswrapper[4833]: I1013 08:45:05.744201 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:45:06 crc kubenswrapper[4833]: I1013 08:45:06.339837 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fh2r5" Oct 13 08:45:06 crc kubenswrapper[4833]: I1013 08:45:06.660606 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9d7e62-9f76-449f-90f7-bcaf9e66da5d" path="/var/lib/kubelet/pods/fc9d7e62-9f76-449f-90f7-bcaf9e66da5d/volumes" Oct 13 08:45:06 crc kubenswrapper[4833]: I1013 08:45:06.919423 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fh2r5"] Oct 13 08:45:08 crc kubenswrapper[4833]: I1013 08:45:08.272570 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fh2r5" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerName="registry-server" containerID="cri-o://c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3" gracePeriod=2 Oct 13 08:45:08 crc kubenswrapper[4833]: I1013 08:45:08.296478 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rms8h" Oct 13 08:45:08 crc kubenswrapper[4833]: 
Oct 13 08:45:08 crc kubenswrapper[4833]: I1013 08:45:08.296478 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rms8h"
Oct 13 08:45:08 crc kubenswrapper[4833]: I1013 08:45:08.297201 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rms8h"
Oct 13 08:45:08 crc kubenswrapper[4833]: I1013 08:45:08.367769 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rms8h"
Oct 13 08:45:08 crc kubenswrapper[4833]: I1013 08:45:08.880235 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh2r5"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.009481 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-catalog-content\") pod \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") "
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.009751 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dwzd\" (UniqueName: \"kubernetes.io/projected/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-kube-api-access-7dwzd\") pod \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") "
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.009872 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-utilities\") pod \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\" (UID: \"f425e14a-c2fa-41b0-b50b-7e11fd0c8523\") "
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.010743 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-utilities" (OuterVolumeSpecName: "utilities") pod "f425e14a-c2fa-41b0-b50b-7e11fd0c8523" (UID: "f425e14a-c2fa-41b0-b50b-7e11fd0c8523"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.019973 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-kube-api-access-7dwzd" (OuterVolumeSpecName: "kube-api-access-7dwzd") pod "f425e14a-c2fa-41b0-b50b-7e11fd0c8523" (UID: "f425e14a-c2fa-41b0-b50b-7e11fd0c8523"). InnerVolumeSpecName "kube-api-access-7dwzd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.082442 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f425e14a-c2fa-41b0-b50b-7e11fd0c8523" (UID: "f425e14a-c2fa-41b0-b50b-7e11fd0c8523"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.113010 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dwzd\" (UniqueName: \"kubernetes.io/projected/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-kube-api-access-7dwzd\") on node \"crc\" DevicePath \"\""
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.113060 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.113081 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f425e14a-c2fa-41b0-b50b-7e11fd0c8523-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.324218 4833 generic.go:334] "Generic (PLEG): container finished" podID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerID="c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3" exitCode=0
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.324412 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh2r5"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.326968 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh2r5" event={"ID":"f425e14a-c2fa-41b0-b50b-7e11fd0c8523","Type":"ContainerDied","Data":"c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3"}
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.327029 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh2r5" event={"ID":"f425e14a-c2fa-41b0-b50b-7e11fd0c8523","Type":"ContainerDied","Data":"4b980853d4a86e005342b230c4f08e3e97a877598c265ecf93e03f7a20f3cc5d"}
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.327083 4833 scope.go:117] "RemoveContainer" containerID="c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.370849 4833 scope.go:117] "RemoveContainer" containerID="f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.386007 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fh2r5"]
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.394793 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fh2r5"]
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.405458 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rms8h"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.414716 4833 scope.go:117] "RemoveContainer" containerID="4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.465872 4833 scope.go:117] "RemoveContainer" containerID="c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3"
Oct 13 08:45:09 crc kubenswrapper[4833]: E1013 08:45:09.467184 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3\": container with ID starting with c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3 not found: ID does not exist" containerID="c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3"
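The NotFound chains here are benign: by the time the kubelet asks the runtime for status, the container is already gone, and deletion is treated as idempotent. A self-contained sketch of that pattern (the FakeRuntime and NotFoundError are stand-ins, not CRI or kubelet types):

```python
class NotFoundError(Exception):
    """Stand-in for the CRI 'rpc error: code = NotFound' seen above."""

class FakeRuntime:
    """Toy runtime holding a set of live container IDs."""
    def __init__(self):
        self.containers = {"b0cee67a6311"}
    def remove_container(self, cid: str) -> None:
        if cid not in self.containers:
            raise NotFoundError(f"could not find container {cid!r}")
        self.containers.discard(cid)

def remove_idempotent(runtime: FakeRuntime, cid: str) -> None:
    try:
        runtime.remove_container(cid)
    except NotFoundError:
        pass  # already removed elsewhere; matches the benign chains above

rt = FakeRuntime()
remove_idempotent(rt, "c0148287deb5")  # removing an already-gone ID is a no-op
```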
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.467244 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3"} err="failed to get container status \"c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3\": rpc error: code = NotFound desc = could not find container \"c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3\": container with ID starting with c0148287deb5130f718079e9376c7e1c25bc4d7a7f7dc47b375cff23475addc3 not found: ID does not exist"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.467282 4833 scope.go:117] "RemoveContainer" containerID="f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25"
Oct 13 08:45:09 crc kubenswrapper[4833]: E1013 08:45:09.467801 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25\": container with ID starting with f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25 not found: ID does not exist" containerID="f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.467834 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25"} err="failed to get container status \"f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25\": rpc error: code = NotFound desc = could not find container \"f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25\": container with ID starting with f66d8bfba9dbf53d2bb9ef4dff9ff4d23d0f22effce26ea55c6c4a216f632e25 not found: ID does not exist"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.467870 4833 scope.go:117] "RemoveContainer" containerID="4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54"
Oct 13 08:45:09 crc kubenswrapper[4833]: E1013 08:45:09.468368 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54\": container with ID starting with 4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54 not found: ID does not exist" containerID="4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54"
Oct 13 08:45:09 crc kubenswrapper[4833]: I1013 08:45:09.468411 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54"} err="failed to get container status \"4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54\": rpc error: code = NotFound desc = could not find container \"4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54\": container with ID starting with 4174404ed15a734acd6e29c86367a87f6628d940cda310da616cc4756dbcdf54 not found: ID does not exist"
Oct 13 08:45:10 crc kubenswrapper[4833]: I1013 08:45:10.652835 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" path="/var/lib/kubelet/pods/f425e14a-c2fa-41b0-b50b-7e11fd0c8523/volumes"
Oct 13 08:45:11 crc kubenswrapper[4833]: I1013 08:45:11.728484 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rms8h"]
Oct 13 08:45:11 crc kubenswrapper[4833]: I1013 08:45:11.731720 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rms8h" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerName="registry-server" containerID="cri-o://76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144" gracePeriod=2
Oct 13 08:45:12 crc kubenswrapper[4833]: E1013 08:45:12.002132 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e3fa35_505e_4d28_96f0_1d0fb02ed435.slice/crio-conmon-76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e3fa35_505e_4d28_96f0_1d0fb02ed435.slice/crio-76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144.scope\": RecentStats: unable to find data in memory cache]"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.302820 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rms8h"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.369472 4833 generic.go:334] "Generic (PLEG): container finished" podID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerID="76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144" exitCode=0
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.369519 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rms8h" event={"ID":"57e3fa35-505e-4d28-96f0-1d0fb02ed435","Type":"ContainerDied","Data":"76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144"}
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.369571 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rms8h" event={"ID":"57e3fa35-505e-4d28-96f0-1d0fb02ed435","Type":"ContainerDied","Data":"0bf16bd1f6539efc43cc5cd3a51ec2b8aaefa5fba2e11262d734b8d0f5013840"}
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.369592 4833 scope.go:117] "RemoveContainer" containerID="76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.369624 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rms8h"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.393381 4833 scope.go:117] "RemoveContainer" containerID="0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.401759 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-utilities\") pod \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") "
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.401819 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpvvw\" (UniqueName: \"kubernetes.io/projected/57e3fa35-505e-4d28-96f0-1d0fb02ed435-kube-api-access-qpvvw\") pod \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") "
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.401855 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-catalog-content\") pod \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\" (UID: \"57e3fa35-505e-4d28-96f0-1d0fb02ed435\") "
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.403917 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-utilities" (OuterVolumeSpecName: "utilities") pod "57e3fa35-505e-4d28-96f0-1d0fb02ed435" (UID: "57e3fa35-505e-4d28-96f0-1d0fb02ed435"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.408176 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e3fa35-505e-4d28-96f0-1d0fb02ed435-kube-api-access-qpvvw" (OuterVolumeSpecName: "kube-api-access-qpvvw") pod "57e3fa35-505e-4d28-96f0-1d0fb02ed435" (UID: "57e3fa35-505e-4d28-96f0-1d0fb02ed435"). InnerVolumeSpecName "kube-api-access-qpvvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.416683 4833 scope.go:117] "RemoveContainer" containerID="04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.422422 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57e3fa35-505e-4d28-96f0-1d0fb02ed435" (UID: "57e3fa35-505e-4d28-96f0-1d0fb02ed435"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.504664 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.504688 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpvvw\" (UniqueName: \"kubernetes.io/projected/57e3fa35-505e-4d28-96f0-1d0fb02ed435-kube-api-access-qpvvw\") on node \"crc\" DevicePath \"\""
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.504697 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e3fa35-505e-4d28-96f0-1d0fb02ed435-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.505355 4833 scope.go:117] "RemoveContainer" containerID="76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144"
Oct 13 08:45:12 crc kubenswrapper[4833]: E1013 08:45:12.505882 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144\": container with ID starting with 76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144 not found: ID does not exist" containerID="76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.505942 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144"} err="failed to get container status \"76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144\": rpc error: code = NotFound desc = could not find container \"76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144\": container with ID starting with 76e9d6c7f5b6d183b41a957c5b9d25a9deddeb938ae538c0eafa8eb432f66144 not found: ID does not exist"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.505974 4833 scope.go:117] "RemoveContainer" containerID="0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c"
Oct 13 08:45:12 crc kubenswrapper[4833]: E1013 08:45:12.506300 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c\": container with ID starting with 0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c not found: ID does not exist" containerID="0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.506328 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c"} err="failed to get container status \"0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c\": rpc error: code = NotFound desc = could not find container \"0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c\": container with ID starting with 0f0eec7812edfe7a86863c4eef7e08c8846056ec2b3152fed84a687209c2386c not found: ID does not exist"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.506350 4833 scope.go:117] "RemoveContainer" containerID="04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444"
Oct 13 08:45:12 crc kubenswrapper[4833]: E1013 08:45:12.507508 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444\": container with ID starting with 04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444 not found: ID does not exist" containerID="04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.507783 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444"} err="failed to get container status \"04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444\": rpc error: code = NotFound desc = could not find container \"04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444\": container with ID starting with 04868a04449426d11eea7d460b58cc021c6790176b35e09c49c90cb38dd7c444 not found: ID does not exist"
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.703605 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rms8h"]
Oct 13 08:45:12 crc kubenswrapper[4833]: I1013 08:45:12.724740 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rms8h"]
Oct 13 08:45:14 crc kubenswrapper[4833]: I1013 08:45:14.645191 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" path="/var/lib/kubelet/pods/57e3fa35-505e-4d28-96f0-1d0fb02ed435/volumes"
Oct 13 08:45:30 crc kubenswrapper[4833]: I1013 08:45:30.542805 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:45:30 crc kubenswrapper[4833]: I1013 08:45:30.543468 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:45:30 crc kubenswrapper[4833]: I1013 08:45:30.543565 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss"
Oct 13 08:45:30 crc kubenswrapper[4833]: I1013 08:45:30.544903 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22856144922dcb85420c849eb3f6308a0a81a29149d29ed40f72d81846153551"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 08:45:30 crc kubenswrapper[4833]: I1013 08:45:30.545005 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://22856144922dcb85420c849eb3f6308a0a81a29149d29ed40f72d81846153551" gracePeriod=600
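The machine-config-daemon liveness failures that recur through this journal (08:44:30, 08:45:00, and here) are all the same symptom: a GET to http://127.0.0.1:8798/health is refused, and after enough failures the kubelet restarts the container. A stdlib approximation of such an HTTP health check, where connection refused counts as failure (a sketch, not the kubelet's prober):

```python
import urllib.request
import urllib.error

def probe(url: str = "http://127.0.0.1:8798/health", timeout: float = 1.0) -> bool:
    """Return True when the endpoint answers with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # Covers "connect: connection refused" as seen in the probe output above.
        return False

print("healthy" if probe() else "unhealthy")
```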
podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="22856144922dcb85420c849eb3f6308a0a81a29149d29ed40f72d81846153551" exitCode=0 Oct 13 08:45:31 crc kubenswrapper[4833]: I1013 08:45:31.644933 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"22856144922dcb85420c849eb3f6308a0a81a29149d29ed40f72d81846153551"} Oct 13 08:45:31 crc kubenswrapper[4833]: I1013 08:45:31.645523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8"} Oct 13 08:45:31 crc kubenswrapper[4833]: I1013 08:45:31.645579 4833 scope.go:117] "RemoveContainer" containerID="18757d11b2563537d0bd73435161e9ea0a84b6b44238545002dede1960b151c7" Oct 13 08:45:57 crc kubenswrapper[4833]: I1013 08:45:57.309277 4833 scope.go:117] "RemoveContainer" containerID="0e52208a96027a5c7e6b9fc703a2f6b7ffa11182f67a0fd1463eee046a4d39f8" Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.068629 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cttk6"] Oct 13 08:46:41 crc kubenswrapper[4833]: E1013 08:46:41.069452 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerName="extract-utilities" Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069464 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerName="extract-utilities" Oct 13 08:46:41 crc kubenswrapper[4833]: E1013 08:46:41.069491 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerName="extract-content" Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069497 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerName="extract-content" Oct 13 08:46:41 crc kubenswrapper[4833]: E1013 08:46:41.069514 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerName="extract-utilities" Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069520 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerName="extract-utilities" Oct 13 08:46:41 crc kubenswrapper[4833]: E1013 08:46:41.069546 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerName="registry-server" Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069552 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerName="registry-server" Oct 13 08:46:41 crc kubenswrapper[4833]: E1013 08:46:41.069563 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113f75fd-7d34-40ba-b6a4-dd90b91c2075" containerName="collect-profiles" Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069568 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="113f75fd-7d34-40ba-b6a4-dd90b91c2075" containerName="collect-profiles" Oct 13 08:46:41 crc kubenswrapper[4833]: E1013 08:46:41.069579 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerName="extract-content" Oct 13 08:46:41 crc 
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069585 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerName="extract-content"
Oct 13 08:46:41 crc kubenswrapper[4833]: E1013 08:46:41.069600 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerName="registry-server"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069606 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerName="registry-server"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069791 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e3fa35-505e-4d28-96f0-1d0fb02ed435" containerName="registry-server"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069807 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="113f75fd-7d34-40ba-b6a4-dd90b91c2075" containerName="collect-profiles"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.069827 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f425e14a-c2fa-41b0-b50b-7e11fd0c8523" containerName="registry-server"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.071443 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.087900 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cttk6"]
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.204264 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-utilities\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.204366 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-catalog-content\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.204443 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cm5b\" (UniqueName: \"kubernetes.io/projected/209a5232-a01d-4097-8626-ce9ec0e51752-kube-api-access-4cm5b\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.306357 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-catalog-content\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.306499 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cm5b\" (UniqueName: \"kubernetes.io/projected/209a5232-a01d-4097-8626-ce9ec0e51752-kube-api-access-4cm5b\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.306653 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-utilities\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.306874 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-catalog-content\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.307027 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-utilities\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.351569 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cm5b\" (UniqueName: \"kubernetes.io/projected/209a5232-a01d-4097-8626-ce9ec0e51752-kube-api-access-4cm5b\") pod \"redhat-operators-cttk6\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.399605 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:41 crc kubenswrapper[4833]: I1013 08:46:41.988486 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cttk6"]
Oct 13 08:46:42 crc kubenswrapper[4833]: I1013 08:46:42.450254 4833 generic.go:334] "Generic (PLEG): container finished" podID="209a5232-a01d-4097-8626-ce9ec0e51752" containerID="14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd" exitCode=0
Oct 13 08:46:42 crc kubenswrapper[4833]: I1013 08:46:42.450367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttk6" event={"ID":"209a5232-a01d-4097-8626-ce9ec0e51752","Type":"ContainerDied","Data":"14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd"}
Oct 13 08:46:42 crc kubenswrapper[4833]: I1013 08:46:42.450470 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttk6" event={"ID":"209a5232-a01d-4097-8626-ce9ec0e51752","Type":"ContainerStarted","Data":"5b56b491c69ef6af8ac97e0158717dcf5e94671ae4b205ab3cbe84e5736748e0"}
Oct 13 08:46:42 crc kubenswrapper[4833]: I1013 08:46:42.451980 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 13 08:46:43 crc kubenswrapper[4833]: I1013 08:46:43.462692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttk6" event={"ID":"209a5232-a01d-4097-8626-ce9ec0e51752","Type":"ContainerStarted","Data":"bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc"}
Oct 13 08:46:46 crc kubenswrapper[4833]: I1013 08:46:46.494026 4833 generic.go:334] "Generic (PLEG): container finished" podID="209a5232-a01d-4097-8626-ce9ec0e51752" containerID="bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc" exitCode=0
Oct 13 08:46:46 crc kubenswrapper[4833]: I1013 08:46:46.494149 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttk6" event={"ID":"209a5232-a01d-4097-8626-ce9ec0e51752","Type":"ContainerDied","Data":"bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc"}
Oct 13 08:46:47 crc kubenswrapper[4833]: I1013 08:46:47.506705 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttk6" event={"ID":"209a5232-a01d-4097-8626-ce9ec0e51752","Type":"ContainerStarted","Data":"93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299"}
Oct 13 08:46:47 crc kubenswrapper[4833]: I1013 08:46:47.528977 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cttk6" podStartSLOduration=1.959257281 podStartE2EDuration="6.528956434s" podCreationTimestamp="2025-10-13 08:46:41 +0000 UTC" firstStartedPulling="2025-10-13 08:46:42.451788884 +0000 UTC m=+8292.552211800" lastFinishedPulling="2025-10-13 08:46:47.021488037 +0000 UTC m=+8297.121910953" observedRunningTime="2025-10-13 08:46:47.524808016 +0000 UTC m=+8297.625230932" watchObservedRunningTime="2025-10-13 08:46:47.528956434 +0000 UTC m=+8297.629379350"
Oct 13 08:46:51 crc kubenswrapper[4833]: I1013 08:46:51.399810 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:51 crc kubenswrapper[4833]: I1013 08:46:51.400492 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:46:52 crc kubenswrapper[4833]: I1013 08:46:52.455672 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cttk6" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="registry-server" probeResult="failure" output=<
Oct 13 08:46:52 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s
Oct 13 08:46:52 crc kubenswrapper[4833]: >
Oct 13 08:47:01 crc kubenswrapper[4833]: I1013 08:47:01.471431 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:47:01 crc kubenswrapper[4833]: I1013 08:47:01.569753 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cttk6"
Oct 13 08:47:01 crc kubenswrapper[4833]: I1013 08:47:01.711187 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cttk6"]
Oct 13 08:47:02 crc kubenswrapper[4833]: I1013 08:47:02.683513 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cttk6" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="registry-server" containerID="cri-o://93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/redhat-operators-cttk6" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.238753 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-utilities\") pod \"209a5232-a01d-4097-8626-ce9ec0e51752\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.238835 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-catalog-content\") pod \"209a5232-a01d-4097-8626-ce9ec0e51752\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.238940 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cm5b\" (UniqueName: \"kubernetes.io/projected/209a5232-a01d-4097-8626-ce9ec0e51752-kube-api-access-4cm5b\") pod \"209a5232-a01d-4097-8626-ce9ec0e51752\" (UID: \"209a5232-a01d-4097-8626-ce9ec0e51752\") " Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.239770 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-utilities" (OuterVolumeSpecName: "utilities") pod "209a5232-a01d-4097-8626-ce9ec0e51752" (UID: "209a5232-a01d-4097-8626-ce9ec0e51752"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.239968 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.244935 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209a5232-a01d-4097-8626-ce9ec0e51752-kube-api-access-4cm5b" (OuterVolumeSpecName: "kube-api-access-4cm5b") pod "209a5232-a01d-4097-8626-ce9ec0e51752" (UID: "209a5232-a01d-4097-8626-ce9ec0e51752"). InnerVolumeSpecName "kube-api-access-4cm5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.328887 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "209a5232-a01d-4097-8626-ce9ec0e51752" (UID: "209a5232-a01d-4097-8626-ce9ec0e51752"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.342180 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/209a5232-a01d-4097-8626-ce9ec0e51752-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.342401 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cm5b\" (UniqueName: \"kubernetes.io/projected/209a5232-a01d-4097-8626-ce9ec0e51752-kube-api-access-4cm5b\") on node \"crc\" DevicePath \"\"" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.701477 4833 generic.go:334] "Generic (PLEG): container finished" podID="209a5232-a01d-4097-8626-ce9ec0e51752" containerID="93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299" exitCode=0 Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.701602 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cttk6" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.701595 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttk6" event={"ID":"209a5232-a01d-4097-8626-ce9ec0e51752","Type":"ContainerDied","Data":"93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299"} Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.702108 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttk6" event={"ID":"209a5232-a01d-4097-8626-ce9ec0e51752","Type":"ContainerDied","Data":"5b56b491c69ef6af8ac97e0158717dcf5e94671ae4b205ab3cbe84e5736748e0"} Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.702148 4833 scope.go:117] "RemoveContainer" containerID="93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.745585 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cttk6"] Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.770137 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cttk6"] Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.778032 4833 scope.go:117] "RemoveContainer" containerID="bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.816529 4833 scope.go:117] "RemoveContainer" containerID="14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.872053 4833 scope.go:117] "RemoveContainer" containerID="93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299" Oct 13 08:47:03 crc kubenswrapper[4833]: E1013 08:47:03.872590 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299\": container with ID starting with 93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299 not found: ID does not exist" containerID="93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.872677 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299"} err="failed to get container status \"93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299\": 
rpc error: code = NotFound desc = could not find container \"93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299\": container with ID starting with 93f8fb388a2a0231fecc65c44440ec86d964d42f1a0d844fb8a6018e11b5d299 not found: ID does not exist" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.872761 4833 scope.go:117] "RemoveContainer" containerID="bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc" Oct 13 08:47:03 crc kubenswrapper[4833]: E1013 08:47:03.873116 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc\": container with ID starting with bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc not found: ID does not exist" containerID="bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.873203 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc"} err="failed to get container status \"bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc\": rpc error: code = NotFound desc = could not find container \"bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc\": container with ID starting with bab6195b901d4ecd963cafe3c07f5b60ee3391de096f9c8e6388376a59e7a6fc not found: ID does not exist" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.873271 4833 scope.go:117] "RemoveContainer" containerID="14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd" Oct 13 08:47:03 crc kubenswrapper[4833]: E1013 08:47:03.873732 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd\": container with ID starting with 14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd not found: ID does not exist" containerID="14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd" Oct 13 08:47:03 crc kubenswrapper[4833]: I1013 08:47:03.873788 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd"} err="failed to get container status \"14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd\": rpc error: code = NotFound desc = could not find container \"14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd\": container with ID starting with 14e36670621f226f9a6107579297f0192a41b06fcadd56c79fddd4090bbb3fbd not found: ID does not exist" Oct 13 08:47:04 crc kubenswrapper[4833]: I1013 08:47:04.640868 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" path="/var/lib/kubelet/pods/209a5232-a01d-4097-8626-ce9ec0e51752/volumes" Oct 13 08:47:30 crc kubenswrapper[4833]: I1013 08:47:30.542983 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:47:30 crc kubenswrapper[4833]: I1013 08:47:30.543813 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:48:00 crc kubenswrapper[4833]: I1013 08:48:00.543642 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:48:00 crc kubenswrapper[4833]: I1013 08:48:00.544406 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:48:30 crc kubenswrapper[4833]: I1013 08:48:30.543238 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:48:30 crc kubenswrapper[4833]: I1013 08:48:30.543967 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:48:30 crc kubenswrapper[4833]: I1013 08:48:30.544038 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 08:48:30 crc kubenswrapper[4833]: I1013 08:48:30.545232 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 08:48:30 crc kubenswrapper[4833]: I1013 08:48:30.545351 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" gracePeriod=600 Oct 13 08:48:30 crc kubenswrapper[4833]: E1013 08:48:30.674972 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:48:31 crc kubenswrapper[4833]: I1013 08:48:31.660490 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" exitCode=0 Oct 13 08:48:31 crc kubenswrapper[4833]: I1013 08:48:31.660560 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8"} Oct 13 08:48:31 crc kubenswrapper[4833]: I1013 08:48:31.660886 4833 scope.go:117] "RemoveContainer" containerID="22856144922dcb85420c849eb3f6308a0a81a29149d29ed40f72d81846153551" Oct 13 08:48:31 crc kubenswrapper[4833]: I1013 08:48:31.661946 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:48:31 crc kubenswrapper[4833]: E1013 08:48:31.662499 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:48:42 crc kubenswrapper[4833]: I1013 08:48:42.627371 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:48:42 crc kubenswrapper[4833]: E1013 08:48:42.628185 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:48:43 crc kubenswrapper[4833]: I1013 08:48:43.816030 4833 generic.go:334] "Generic (PLEG): container finished" podID="d3e31455-5a84-415b-be9a-91e2f7033095" containerID="b0cee67a6311bcb0d63210bf4967a1af1525d1bfe86e4b0dff87becbde018ac3" exitCode=0 Oct 13 08:48:43 crc kubenswrapper[4833]: I1013 08:48:43.816124 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" event={"ID":"d3e31455-5a84-415b-be9a-91e2f7033095","Type":"ContainerDied","Data":"b0cee67a6311bcb0d63210bf4967a1af1525d1bfe86e4b0dff87becbde018ac3"} Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.278903 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.311087 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz22d\" (UniqueName: \"kubernetes.io/projected/d3e31455-5a84-415b-be9a-91e2f7033095-kube-api-access-fz22d\") pod \"d3e31455-5a84-415b-be9a-91e2f7033095\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.311203 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-combined-ca-bundle\") pod \"d3e31455-5a84-415b-be9a-91e2f7033095\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.311230 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-ssh-key\") pod \"d3e31455-5a84-415b-be9a-91e2f7033095\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.311315 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-inventory\") pod \"d3e31455-5a84-415b-be9a-91e2f7033095\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.311378 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-secret-0\") pod \"d3e31455-5a84-415b-be9a-91e2f7033095\" (UID: \"d3e31455-5a84-415b-be9a-91e2f7033095\") " Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.322168 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e31455-5a84-415b-be9a-91e2f7033095-kube-api-access-fz22d" (OuterVolumeSpecName: "kube-api-access-fz22d") pod "d3e31455-5a84-415b-be9a-91e2f7033095" (UID: "d3e31455-5a84-415b-be9a-91e2f7033095"). InnerVolumeSpecName "kube-api-access-fz22d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.323718 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d3e31455-5a84-415b-be9a-91e2f7033095" (UID: "d3e31455-5a84-415b-be9a-91e2f7033095"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.344428 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d3e31455-5a84-415b-be9a-91e2f7033095" (UID: "d3e31455-5a84-415b-be9a-91e2f7033095"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.350161 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d3e31455-5a84-415b-be9a-91e2f7033095" (UID: "d3e31455-5a84-415b-be9a-91e2f7033095"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.361950 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-inventory" (OuterVolumeSpecName: "inventory") pod "d3e31455-5a84-415b-be9a-91e2f7033095" (UID: "d3e31455-5a84-415b-be9a-91e2f7033095"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.419494 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.419522 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.419543 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.419553 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3e31455-5a84-415b-be9a-91e2f7033095-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.419563 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz22d\" (UniqueName: \"kubernetes.io/projected/d3e31455-5a84-415b-be9a-91e2f7033095-kube-api-access-fz22d\") on node \"crc\" DevicePath \"\"" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.840642 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" event={"ID":"d3e31455-5a84-415b-be9a-91e2f7033095","Type":"ContainerDied","Data":"b059427683f49ae85172747fbc7a07f744e3579cefa6d7e6658a5e7ec28b6992"} Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.840683 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b059427683f49ae85172747fbc7a07f744e3579cefa6d7e6658a5e7ec28b6992" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.840739 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-dkj5f" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.983465 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-txjbp"] Oct 13 08:48:45 crc kubenswrapper[4833]: E1013 08:48:45.984421 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="extract-utilities" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.984443 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="extract-utilities" Oct 13 08:48:45 crc kubenswrapper[4833]: E1013 08:48:45.984509 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="registry-server" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.984518 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="registry-server" Oct 13 08:48:45 crc kubenswrapper[4833]: E1013 08:48:45.984566 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e31455-5a84-415b-be9a-91e2f7033095" containerName="libvirt-openstack-openstack-cell1" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.984576 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e31455-5a84-415b-be9a-91e2f7033095" containerName="libvirt-openstack-openstack-cell1" Oct 13 08:48:45 crc kubenswrapper[4833]: E1013 08:48:45.984615 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="extract-content" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.984625 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="extract-content" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.985080 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e31455-5a84-415b-be9a-91e2f7033095" containerName="libvirt-openstack-openstack-cell1" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.985122 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="209a5232-a01d-4097-8626-ce9ec0e51752" containerName="registry-server" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.986378 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.993743 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:48:45 crc kubenswrapper[4833]: I1013 08:48:45.998400 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:45.998769 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:45.999116 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:45.999466 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:45.999854 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.000055 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.008206 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-txjbp"] Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.160887 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-inventory\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.161185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.161222 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.161278 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.161390 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cells-global-config-0\") pod 
\"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.161408 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.161424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wcm\" (UniqueName: \"kubernetes.io/projected/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-kube-api-access-r7wcm\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.161477 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.161695 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264251 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264418 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264436 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wcm\" (UniqueName: \"kubernetes.io/projected/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-kube-api-access-r7wcm\") pod 
\"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264506 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264533 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-inventory\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264568 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.264597 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.265816 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.268187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.268458 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-inventory\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc 
kubenswrapper[4833]: I1013 08:48:46.269514 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.273143 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.275323 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.275928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.285744 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.289290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wcm\" (UniqueName: \"kubernetes.io/projected/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-kube-api-access-r7wcm\") pod \"nova-cell1-openstack-openstack-cell1-txjbp\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.334921 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:48:46 crc kubenswrapper[4833]: I1013 08:48:46.901218 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-txjbp"] Oct 13 08:48:47 crc kubenswrapper[4833]: I1013 08:48:47.869646 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" event={"ID":"2e4c0a0f-f063-49a3-8289-6efdb12b97fc","Type":"ContainerStarted","Data":"7dafdb4edea9ee8dcd024b63d0946cd7394b2b0d76c3611a154fd35191191ae5"} Oct 13 08:48:48 crc kubenswrapper[4833]: I1013 08:48:48.883796 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" event={"ID":"2e4c0a0f-f063-49a3-8289-6efdb12b97fc","Type":"ContainerStarted","Data":"d4ea855d14ec9e15a0baeadcf35e714493ecda7bfc5c4d0b73d4e31e2cba0a42"} Oct 13 08:48:48 crc kubenswrapper[4833]: I1013 08:48:48.916587 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" podStartSLOduration=3.201351623 podStartE2EDuration="3.916565126s" podCreationTimestamp="2025-10-13 08:48:45 +0000 UTC" firstStartedPulling="2025-10-13 08:48:46.933603973 +0000 UTC m=+8417.034026889" lastFinishedPulling="2025-10-13 08:48:47.648817466 +0000 UTC m=+8417.749240392" observedRunningTime="2025-10-13 08:48:48.909636169 +0000 UTC m=+8419.010059085" watchObservedRunningTime="2025-10-13 08:48:48.916565126 +0000 UTC m=+8419.016988032" Oct 13 08:48:57 crc kubenswrapper[4833]: I1013 08:48:57.627352 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:48:57 crc kubenswrapper[4833]: E1013 08:48:57.628244 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:49:10 crc kubenswrapper[4833]: I1013 08:49:10.643363 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:49:10 crc kubenswrapper[4833]: E1013 08:49:10.644533 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:49:22 crc kubenswrapper[4833]: I1013 08:49:22.629751 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:49:22 crc kubenswrapper[4833]: E1013 08:49:22.639136 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:49:35 crc kubenswrapper[4833]: I1013 08:49:35.627090 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:49:35 crc kubenswrapper[4833]: E1013 08:49:35.628197 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:49:46 crc kubenswrapper[4833]: I1013 08:49:46.629148 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:49:46 crc kubenswrapper[4833]: E1013 08:49:46.630018 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:50:01 crc kubenswrapper[4833]: I1013 08:50:01.627870 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:50:01 crc kubenswrapper[4833]: E1013 08:50:01.629294 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:50:12 crc kubenswrapper[4833]: I1013 08:50:12.627954 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:50:12 crc kubenswrapper[4833]: E1013 08:50:12.628867 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:50:25 crc kubenswrapper[4833]: I1013 08:50:25.627726 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:50:25 crc kubenswrapper[4833]: E1013 08:50:25.628515 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:50:38 crc kubenswrapper[4833]: I1013 08:50:38.627482 4833 scope.go:117] "RemoveContainer" 
containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:50:38 crc kubenswrapper[4833]: E1013 08:50:38.628673 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:50:51 crc kubenswrapper[4833]: I1013 08:50:51.627834 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:50:51 crc kubenswrapper[4833]: E1013 08:50:51.629119 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:51:05 crc kubenswrapper[4833]: I1013 08:51:05.628335 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:51:05 crc kubenswrapper[4833]: E1013 08:51:05.629736 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:51:19 crc kubenswrapper[4833]: I1013 08:51:19.627462 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:51:19 crc kubenswrapper[4833]: E1013 08:51:19.630239 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:51:32 crc kubenswrapper[4833]: I1013 08:51:32.627949 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:51:32 crc kubenswrapper[4833]: E1013 08:51:32.629783 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:51:44 crc kubenswrapper[4833]: I1013 08:51:44.627766 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:51:44 crc kubenswrapper[4833]: E1013 08:51:44.628621 4833 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:51:56 crc kubenswrapper[4833]: I1013 08:51:56.628268 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:51:56 crc kubenswrapper[4833]: E1013 08:51:56.629593 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:52:11 crc kubenswrapper[4833]: I1013 08:52:11.627667 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:52:11 crc kubenswrapper[4833]: E1013 08:52:11.628445 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:52:23 crc kubenswrapper[4833]: I1013 08:52:23.628264 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:52:23 crc kubenswrapper[4833]: E1013 08:52:23.629468 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:52:26 crc kubenswrapper[4833]: I1013 08:52:26.456642 4833 generic.go:334] "Generic (PLEG): container finished" podID="2e4c0a0f-f063-49a3-8289-6efdb12b97fc" containerID="d4ea855d14ec9e15a0baeadcf35e714493ecda7bfc5c4d0b73d4e31e2cba0a42" exitCode=0 Oct 13 08:52:26 crc kubenswrapper[4833]: I1013 08:52:26.456699 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" event={"ID":"2e4c0a0f-f063-49a3-8289-6efdb12b97fc","Type":"ContainerDied","Data":"d4ea855d14ec9e15a0baeadcf35e714493ecda7bfc5c4d0b73d4e31e2cba0a42"} Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.957953 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963284 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-1\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963313 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-0\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963331 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-1\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963407 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-0\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963434 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7wcm\" (UniqueName: \"kubernetes.io/projected/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-kube-api-access-r7wcm\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963451 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-inventory\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963492 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cells-global-config-0\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963566 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-combined-ca-bundle\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.963590 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-ssh-key\") pod \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\" (UID: \"2e4c0a0f-f063-49a3-8289-6efdb12b97fc\") " Oct 13 08:52:27 crc kubenswrapper[4833]: I1013 08:52:27.969563 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-kube-api-access-r7wcm" (OuterVolumeSpecName: "kube-api-access-r7wcm") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "kube-api-access-r7wcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.021314 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.026933 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.034327 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-inventory" (OuterVolumeSpecName: "inventory") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.036777 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.044872 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.045930 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.047776 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.054209 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2e4c0a0f-f063-49a3-8289-6efdb12b97fc" (UID: "2e4c0a0f-f063-49a3-8289-6efdb12b97fc"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066570 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7wcm\" (UniqueName: \"kubernetes.io/projected/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-kube-api-access-r7wcm\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066604 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066615 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066623 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066632 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066643 4833 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066653 4833 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066661 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.066669 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2e4c0a0f-f063-49a3-8289-6efdb12b97fc-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.482013 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" event={"ID":"2e4c0a0f-f063-49a3-8289-6efdb12b97fc","Type":"ContainerDied","Data":"7dafdb4edea9ee8dcd024b63d0946cd7394b2b0d76c3611a154fd35191191ae5"} Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.482373 4833 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7dafdb4edea9ee8dcd024b63d0946cd7394b2b0d76c3611a154fd35191191ae5" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.482121 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-txjbp" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.616389 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-nwksz"] Oct 13 08:52:28 crc kubenswrapper[4833]: E1013 08:52:28.617220 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4c0a0f-f063-49a3-8289-6efdb12b97fc" containerName="nova-cell1-openstack-openstack-cell1" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.617241 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4c0a0f-f063-49a3-8289-6efdb12b97fc" containerName="nova-cell1-openstack-openstack-cell1" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.617467 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4c0a0f-f063-49a3-8289-6efdb12b97fc" containerName="nova-cell1-openstack-openstack-cell1" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.621159 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.624275 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.624308 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.624562 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.625338 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.625734 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.654211 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-nwksz"] Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.781224 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-inventory\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.781332 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.781511 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ssh-key\") pod \"telemetry-openstack-openstack-cell1-nwksz\" 
(UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.781622 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzcxw\" (UniqueName: \"kubernetes.io/projected/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-kube-api-access-qzcxw\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.781878 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.781998 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.782045 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.884701 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.884798 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.885495 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-inventory\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.885924 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-telemetry-combined-ca-bundle\") pod 
\"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.886049 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ssh-key\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.886102 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzcxw\" (UniqueName: \"kubernetes.io/projected/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-kube-api-access-qzcxw\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.886221 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.889497 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ssh-key\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.890230 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.890402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.890470 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-inventory\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.890624 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc 
kubenswrapper[4833]: I1013 08:52:28.890871 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.906906 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzcxw\" (UniqueName: \"kubernetes.io/projected/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-kube-api-access-qzcxw\") pod \"telemetry-openstack-openstack-cell1-nwksz\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") " pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:28 crc kubenswrapper[4833]: I1013 08:52:28.968873 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-nwksz" Oct 13 08:52:29 crc kubenswrapper[4833]: I1013 08:52:29.572771 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-nwksz"] Oct 13 08:52:29 crc kubenswrapper[4833]: I1013 08:52:29.584079 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 08:52:30 crc kubenswrapper[4833]: I1013 08:52:30.506017 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-nwksz" event={"ID":"58c23bcc-cad3-4d75-9970-b8b9335d7fe5","Type":"ContainerStarted","Data":"25db2846fcfbd2ad3b496d5d6702158d86918d390a36937d1e70f274f0311404"} Oct 13 08:52:30 crc kubenswrapper[4833]: I1013 08:52:30.506763 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-nwksz" event={"ID":"58c23bcc-cad3-4d75-9970-b8b9335d7fe5","Type":"ContainerStarted","Data":"1387134d0fe832fffc30d1d711eace0114647938367aead162bb605b44c062c2"} Oct 13 08:52:30 crc kubenswrapper[4833]: I1013 08:52:30.522455 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-nwksz" podStartSLOduration=2.048125472 podStartE2EDuration="2.522426639s" podCreationTimestamp="2025-10-13 08:52:28 +0000 UTC" firstStartedPulling="2025-10-13 08:52:29.583639554 +0000 UTC m=+8639.684062510" lastFinishedPulling="2025-10-13 08:52:30.057940751 +0000 UTC m=+8640.158363677" observedRunningTime="2025-10-13 08:52:30.520419312 +0000 UTC m=+8640.620842268" watchObservedRunningTime="2025-10-13 08:52:30.522426639 +0000 UTC m=+8640.622849585" Oct 13 08:52:36 crc kubenswrapper[4833]: I1013 08:52:36.627175 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:52:36 crc kubenswrapper[4833]: E1013 08:52:36.628035 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:52:49 crc kubenswrapper[4833]: I1013 08:52:49.627035 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:52:49 crc 
kubenswrapper[4833]: E1013 08:52:49.628517 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.082276 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dnmjs"] Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.085431 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.101756 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnmjs"] Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.135396 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65bcw\" (UniqueName: \"kubernetes.io/projected/4bb75d48-e779-4278-b441-4150ed62ca2a-kube-api-access-65bcw\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.135677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-catalog-content\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.135763 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-utilities\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.238312 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-catalog-content\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.238384 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-utilities\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.238579 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65bcw\" (UniqueName: \"kubernetes.io/projected/4bb75d48-e779-4278-b441-4150ed62ca2a-kube-api-access-65bcw\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.239306 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-utilities\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.239694 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-catalog-content\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.263220 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65bcw\" (UniqueName: \"kubernetes.io/projected/4bb75d48-e779-4278-b441-4150ed62ca2a-kube-api-access-65bcw\") pod \"certified-operators-dnmjs\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.410992 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:52:50 crc kubenswrapper[4833]: W1013 08:52:50.923002 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bb75d48_e779_4278_b441_4150ed62ca2a.slice/crio-462c7f6d6ec1424b107228d7532fdf98bb10f41f9bdc53a53846bdf1ca6ca775 WatchSource:0}: Error finding container 462c7f6d6ec1424b107228d7532fdf98bb10f41f9bdc53a53846bdf1ca6ca775: Status 404 returned error can't find the container with id 462c7f6d6ec1424b107228d7532fdf98bb10f41f9bdc53a53846bdf1ca6ca775 Oct 13 08:52:50 crc kubenswrapper[4833]: I1013 08:52:50.924614 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnmjs"] Oct 13 08:52:51 crc kubenswrapper[4833]: I1013 08:52:51.765935 4833 generic.go:334] "Generic (PLEG): container finished" podID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerID="19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc" exitCode=0 Oct 13 08:52:51 crc kubenswrapper[4833]: I1013 08:52:51.766075 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnmjs" event={"ID":"4bb75d48-e779-4278-b441-4150ed62ca2a","Type":"ContainerDied","Data":"19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc"} Oct 13 08:52:51 crc kubenswrapper[4833]: I1013 08:52:51.766381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnmjs" event={"ID":"4bb75d48-e779-4278-b441-4150ed62ca2a","Type":"ContainerStarted","Data":"462c7f6d6ec1424b107228d7532fdf98bb10f41f9bdc53a53846bdf1ca6ca775"} Oct 13 08:52:53 crc kubenswrapper[4833]: I1013 08:52:53.787904 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnmjs" event={"ID":"4bb75d48-e779-4278-b441-4150ed62ca2a","Type":"ContainerStarted","Data":"67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590"} Oct 13 08:52:54 crc kubenswrapper[4833]: I1013 08:52:54.801861 4833 generic.go:334] "Generic (PLEG): container finished" podID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerID="67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590" exitCode=0 Oct 13 08:52:54 crc kubenswrapper[4833]: I1013 08:52:54.801922 4833 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnmjs" event={"ID":"4bb75d48-e779-4278-b441-4150ed62ca2a","Type":"ContainerDied","Data":"67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590"} Oct 13 08:52:55 crc kubenswrapper[4833]: I1013 08:52:55.818240 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnmjs" event={"ID":"4bb75d48-e779-4278-b441-4150ed62ca2a","Type":"ContainerStarted","Data":"7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a"} Oct 13 08:52:55 crc kubenswrapper[4833]: I1013 08:52:55.838013 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dnmjs" podStartSLOduration=2.328049823 podStartE2EDuration="5.837985873s" podCreationTimestamp="2025-10-13 08:52:50 +0000 UTC" firstStartedPulling="2025-10-13 08:52:51.768878697 +0000 UTC m=+8661.869301613" lastFinishedPulling="2025-10-13 08:52:55.278814747 +0000 UTC m=+8665.379237663" observedRunningTime="2025-10-13 08:52:55.837019966 +0000 UTC m=+8665.937442892" watchObservedRunningTime="2025-10-13 08:52:55.837985873 +0000 UTC m=+8665.938408799" Oct 13 08:53:00 crc kubenswrapper[4833]: I1013 08:53:00.411123 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:53:00 crc kubenswrapper[4833]: I1013 08:53:00.411622 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:53:00 crc kubenswrapper[4833]: I1013 08:53:00.476375 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:53:00 crc kubenswrapper[4833]: I1013 08:53:00.942692 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.258037 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnmjs"] Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.258509 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dnmjs" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerName="registry-server" containerID="cri-o://7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a" gracePeriod=2 Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.631609 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:53:03 crc kubenswrapper[4833]: E1013 08:53:03.632228 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.767521 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.910628 4833 generic.go:334] "Generic (PLEG): container finished" podID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerID="7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a" exitCode=0 Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.910668 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnmjs" event={"ID":"4bb75d48-e779-4278-b441-4150ed62ca2a","Type":"ContainerDied","Data":"7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a"} Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.910712 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnmjs" event={"ID":"4bb75d48-e779-4278-b441-4150ed62ca2a","Type":"ContainerDied","Data":"462c7f6d6ec1424b107228d7532fdf98bb10f41f9bdc53a53846bdf1ca6ca775"} Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.910731 4833 scope.go:117] "RemoveContainer" containerID="7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.911044 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnmjs" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.930489 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-catalog-content\") pod \"4bb75d48-e779-4278-b441-4150ed62ca2a\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.930816 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-utilities\") pod \"4bb75d48-e779-4278-b441-4150ed62ca2a\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.931509 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65bcw\" (UniqueName: \"kubernetes.io/projected/4bb75d48-e779-4278-b441-4150ed62ca2a-kube-api-access-65bcw\") pod \"4bb75d48-e779-4278-b441-4150ed62ca2a\" (UID: \"4bb75d48-e779-4278-b441-4150ed62ca2a\") " Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.931422 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-utilities" (OuterVolumeSpecName: "utilities") pod "4bb75d48-e779-4278-b441-4150ed62ca2a" (UID: "4bb75d48-e779-4278-b441-4150ed62ca2a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.933146 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.933993 4833 scope.go:117] "RemoveContainer" containerID="67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.937310 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb75d48-e779-4278-b441-4150ed62ca2a-kube-api-access-65bcw" (OuterVolumeSpecName: "kube-api-access-65bcw") pod "4bb75d48-e779-4278-b441-4150ed62ca2a" (UID: "4bb75d48-e779-4278-b441-4150ed62ca2a"). InnerVolumeSpecName "kube-api-access-65bcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:53:03 crc kubenswrapper[4833]: I1013 08:53:03.973229 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bb75d48-e779-4278-b441-4150ed62ca2a" (UID: "4bb75d48-e779-4278-b441-4150ed62ca2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.035378 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb75d48-e779-4278-b441-4150ed62ca2a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.035725 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65bcw\" (UniqueName: \"kubernetes.io/projected/4bb75d48-e779-4278-b441-4150ed62ca2a-kube-api-access-65bcw\") on node \"crc\" DevicePath \"\"" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.044322 4833 scope.go:117] "RemoveContainer" containerID="19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.094249 4833 scope.go:117] "RemoveContainer" containerID="7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a" Oct 13 08:53:04 crc kubenswrapper[4833]: E1013 08:53:04.095103 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a\": container with ID starting with 7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a not found: ID does not exist" containerID="7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.095152 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a"} err="failed to get container status \"7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a\": rpc error: code = NotFound desc = could not find container \"7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a\": container with ID starting with 7ca0805c467bf62e88d7d93fba902358dd2b3f3402eb657f1e2c8322bb1a441a not found: ID does not exist" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.095173 4833 scope.go:117] "RemoveContainer" containerID="67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590" Oct 13 
08:53:04 crc kubenswrapper[4833]: E1013 08:53:04.095748 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590\": container with ID starting with 67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590 not found: ID does not exist" containerID="67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.095793 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590"} err="failed to get container status \"67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590\": rpc error: code = NotFound desc = could not find container \"67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590\": container with ID starting with 67dbc8be43b2a971517fffa874a80dbc16645bc3238c2843ff872ef082c19590 not found: ID does not exist" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.095811 4833 scope.go:117] "RemoveContainer" containerID="19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc" Oct 13 08:53:04 crc kubenswrapper[4833]: E1013 08:53:04.096189 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc\": container with ID starting with 19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc not found: ID does not exist" containerID="19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.096215 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc"} err="failed to get container status \"19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc\": rpc error: code = NotFound desc = could not find container \"19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc\": container with ID starting with 19c872db3732b18024a5947a596081627f6f2838742e2dfde5be280a8c8567dc not found: ID does not exist" Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.249800 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnmjs"] Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.261159 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dnmjs"] Oct 13 08:53:04 crc kubenswrapper[4833]: I1013 08:53:04.645366 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" path="/var/lib/kubelet/pods/4bb75d48-e779-4278-b441-4150ed62ca2a/volumes" Oct 13 08:53:15 crc kubenswrapper[4833]: I1013 08:53:15.627747 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:53:15 crc kubenswrapper[4833]: E1013 08:53:15.628557 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:53:28 crc kubenswrapper[4833]: I1013 08:53:28.627703 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:53:28 crc kubenswrapper[4833]: E1013 08:53:28.628565 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 08:53:43 crc kubenswrapper[4833]: I1013 08:53:43.628625 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8" Oct 13 08:53:44 crc kubenswrapper[4833]: I1013 08:53:44.430566 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"7e895bd5c6e21977d6a07da7fbbf215e42ea81d6910b18a5ec4218a548cc487e"} Oct 13 08:56:00 crc kubenswrapper[4833]: I1013 08:56:00.542449 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 08:56:00 crc kubenswrapper[4833]: I1013 08:56:00.543218 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.627450 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-99vgl"] Oct 13 08:56:01 crc kubenswrapper[4833]: E1013 08:56:01.629222 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerName="registry-server" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.629313 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerName="registry-server" Oct 13 08:56:01 crc kubenswrapper[4833]: E1013 08:56:01.629406 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerName="extract-utilities" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.629463 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerName="extract-utilities" Oct 13 08:56:01 crc kubenswrapper[4833]: E1013 08:56:01.629560 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerName="extract-content" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.629628 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerName="extract-content" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.629885 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb75d48-e779-4278-b441-4150ed62ca2a" containerName="registry-server" Oct 
13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.639183 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.656182 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-99vgl"] Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.811398 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-catalog-content\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.811509 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-utilities\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.811785 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sfcf\" (UniqueName: \"kubernetes.io/projected/0721846b-6c89-471c-8bac-6720a4519f5b-kube-api-access-9sfcf\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.914176 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-utilities\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.914345 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sfcf\" (UniqueName: \"kubernetes.io/projected/0721846b-6c89-471c-8bac-6720a4519f5b-kube-api-access-9sfcf\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.914439 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-catalog-content\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.914837 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-utilities\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.914927 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-catalog-content\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 
13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.944389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sfcf\" (UniqueName: \"kubernetes.io/projected/0721846b-6c89-471c-8bac-6720a4519f5b-kube-api-access-9sfcf\") pod \"redhat-marketplace-99vgl\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:01 crc kubenswrapper[4833]: I1013 08:56:01.971812 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:02 crc kubenswrapper[4833]: I1013 08:56:02.455797 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-99vgl"] Oct 13 08:56:03 crc kubenswrapper[4833]: I1013 08:56:03.089290 4833 generic.go:334] "Generic (PLEG): container finished" podID="0721846b-6c89-471c-8bac-6720a4519f5b" containerID="8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5" exitCode=0 Oct 13 08:56:03 crc kubenswrapper[4833]: I1013 08:56:03.089360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99vgl" event={"ID":"0721846b-6c89-471c-8bac-6720a4519f5b","Type":"ContainerDied","Data":"8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5"} Oct 13 08:56:03 crc kubenswrapper[4833]: I1013 08:56:03.089764 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99vgl" event={"ID":"0721846b-6c89-471c-8bac-6720a4519f5b","Type":"ContainerStarted","Data":"f6ca4ad55d689a50f5ce79b20d222cad9a559b06b7c3514834cd242e18e0d976"} Oct 13 08:56:05 crc kubenswrapper[4833]: I1013 08:56:05.114341 4833 generic.go:334] "Generic (PLEG): container finished" podID="0721846b-6c89-471c-8bac-6720a4519f5b" containerID="64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d" exitCode=0 Oct 13 08:56:05 crc kubenswrapper[4833]: I1013 08:56:05.114417 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99vgl" event={"ID":"0721846b-6c89-471c-8bac-6720a4519f5b","Type":"ContainerDied","Data":"64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d"} Oct 13 08:56:07 crc kubenswrapper[4833]: I1013 08:56:07.137399 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99vgl" event={"ID":"0721846b-6c89-471c-8bac-6720a4519f5b","Type":"ContainerStarted","Data":"5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df"} Oct 13 08:56:07 crc kubenswrapper[4833]: I1013 08:56:07.168023 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-99vgl" podStartSLOduration=3.2635784709999998 podStartE2EDuration="6.168000435s" podCreationTimestamp="2025-10-13 08:56:01 +0000 UTC" firstStartedPulling="2025-10-13 08:56:03.091884499 +0000 UTC m=+8853.192307415" lastFinishedPulling="2025-10-13 08:56:05.996306423 +0000 UTC m=+8856.096729379" observedRunningTime="2025-10-13 08:56:07.154015816 +0000 UTC m=+8857.254438742" watchObservedRunningTime="2025-10-13 08:56:07.168000435 +0000 UTC m=+8857.268423351" Oct 13 08:56:11 crc kubenswrapper[4833]: I1013 08:56:11.972827 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:11 crc kubenswrapper[4833]: I1013 08:56:11.973377 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:12 crc kubenswrapper[4833]: I1013 08:56:12.023803 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:12 crc kubenswrapper[4833]: I1013 08:56:12.250380 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:12 crc kubenswrapper[4833]: I1013 08:56:12.297254 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-99vgl"] Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.215345 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-99vgl" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" containerName="registry-server" containerID="cri-o://5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df" gracePeriod=2 Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.696967 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99vgl" Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.820905 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sfcf\" (UniqueName: \"kubernetes.io/projected/0721846b-6c89-471c-8bac-6720a4519f5b-kube-api-access-9sfcf\") pod \"0721846b-6c89-471c-8bac-6720a4519f5b\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.821303 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-catalog-content\") pod \"0721846b-6c89-471c-8bac-6720a4519f5b\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.821644 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-utilities\") pod \"0721846b-6c89-471c-8bac-6720a4519f5b\" (UID: \"0721846b-6c89-471c-8bac-6720a4519f5b\") " Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.822617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-utilities" (OuterVolumeSpecName: "utilities") pod "0721846b-6c89-471c-8bac-6720a4519f5b" (UID: "0721846b-6c89-471c-8bac-6720a4519f5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.827891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0721846b-6c89-471c-8bac-6720a4519f5b-kube-api-access-9sfcf" (OuterVolumeSpecName: "kube-api-access-9sfcf") pod "0721846b-6c89-471c-8bac-6720a4519f5b" (UID: "0721846b-6c89-471c-8bac-6720a4519f5b"). InnerVolumeSpecName "kube-api-access-9sfcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.839663 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0721846b-6c89-471c-8bac-6720a4519f5b" (UID: "0721846b-6c89-471c-8bac-6720a4519f5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.924057 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.924097 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sfcf\" (UniqueName: \"kubernetes.io/projected/0721846b-6c89-471c-8bac-6720a4519f5b-kube-api-access-9sfcf\") on node \"crc\" DevicePath \"\"" Oct 13 08:56:14 crc kubenswrapper[4833]: I1013 08:56:14.924114 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0721846b-6c89-471c-8bac-6720a4519f5b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.224929 4833 generic.go:334] "Generic (PLEG): container finished" podID="0721846b-6c89-471c-8bac-6720a4519f5b" containerID="5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df" exitCode=0 Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.224972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99vgl" event={"ID":"0721846b-6c89-471c-8bac-6720a4519f5b","Type":"ContainerDied","Data":"5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df"} Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.225002 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99vgl" event={"ID":"0721846b-6c89-471c-8bac-6720a4519f5b","Type":"ContainerDied","Data":"f6ca4ad55d689a50f5ce79b20d222cad9a559b06b7c3514834cd242e18e0d976"} Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.225018 4833 util.go:48] "No ready sandbox for pod can be found. 
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.225031 4833 scope.go:117] "RemoveContainer" containerID="5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df"
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.246508 4833 scope.go:117] "RemoveContainer" containerID="64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d"
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.262910 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-99vgl"]
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.275662 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-99vgl"]
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.283413 4833 scope.go:117] "RemoveContainer" containerID="8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5"
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.337303 4833 scope.go:117] "RemoveContainer" containerID="5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df"
Oct 13 08:56:15 crc kubenswrapper[4833]: E1013 08:56:15.338145 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df\": container with ID starting with 5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df not found: ID does not exist" containerID="5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df"
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.338189 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df"} err="failed to get container status \"5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df\": rpc error: code = NotFound desc = could not find container \"5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df\": container with ID starting with 5a8c0a54f159450ca1f963383543393b59caae5bf748898ce298b2dfdad734df not found: ID does not exist"
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.338224 4833 scope.go:117] "RemoveContainer" containerID="64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d"
Oct 13 08:56:15 crc kubenswrapper[4833]: E1013 08:56:15.338560 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d\": container with ID starting with 64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d not found: ID does not exist" containerID="64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d"
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.338594 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d"} err="failed to get container status \"64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d\": rpc error: code = NotFound desc = could not find container \"64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d\": container with ID starting with 64ef1533ea6ab4fb2e05f38507dad9f7cbd90943a1dbc65451d2b66a146b757d not found: ID does not exist"
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.338616 4833 scope.go:117] "RemoveContainer" containerID="8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5"
Oct 13 08:56:15 crc kubenswrapper[4833]: E1013 08:56:15.338856 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5\": container with ID starting with 8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5 not found: ID does not exist" containerID="8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5"
Oct 13 08:56:15 crc kubenswrapper[4833]: I1013 08:56:15.338887 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5"} err="failed to get container status \"8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5\": rpc error: code = NotFound desc = could not find container \"8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5\": container with ID starting with 8959554d4ea2847336710774d21fe953df1c664f2ee418e797387028be9a7ad5 not found: ID does not exist"
Oct 13 08:56:16 crc kubenswrapper[4833]: I1013 08:56:16.662394 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" path="/var/lib/kubelet/pods/0721846b-6c89-471c-8bac-6720a4519f5b/volumes"
Oct 13 08:56:30 crc kubenswrapper[4833]: I1013 08:56:30.542384 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:56:30 crc kubenswrapper[4833]: I1013 08:56:30.542803 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.711606 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mpjpn"]
Oct 13 08:56:59 crc kubenswrapper[4833]: E1013 08:56:59.715026 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" containerName="extract-utilities"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.715053 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" containerName="extract-utilities"
Oct 13 08:56:59 crc kubenswrapper[4833]: E1013 08:56:59.715370 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" containerName="registry-server"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.715385 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" containerName="registry-server"
Oct 13 08:56:59 crc kubenswrapper[4833]: E1013 08:56:59.715613 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" containerName="extract-content"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.715747 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" containerName="extract-content"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.716282 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0721846b-6c89-471c-8bac-6720a4519f5b" containerName="registry-server"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.721604 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.743707 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpjpn"]
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.797026 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-utilities\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.797083 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286g4\" (UniqueName: \"kubernetes.io/projected/974f077c-833b-4949-8d1a-2269e1d0df1c-kube-api-access-286g4\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.797130 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-catalog-content\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.899226 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-utilities\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.899284 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286g4\" (UniqueName: \"kubernetes.io/projected/974f077c-833b-4949-8d1a-2269e1d0df1c-kube-api-access-286g4\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.899339 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-catalog-content\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.899855 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-utilities\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.899869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-catalog-content\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:56:59 crc kubenswrapper[4833]: I1013 08:56:59.935349 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286g4\" (UniqueName: \"kubernetes.io/projected/974f077c-833b-4949-8d1a-2269e1d0df1c-kube-api-access-286g4\") pod \"redhat-operators-mpjpn\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") " pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.043423 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.545114 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.545423 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.545483 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss"
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.547931 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e895bd5c6e21977d6a07da7fbbf215e42ea81d6910b18a5ec4218a548cc487e"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.548016 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://7e895bd5c6e21977d6a07da7fbbf215e42ea81d6910b18a5ec4218a548cc487e" gracePeriod=600
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.552833 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpjpn"]
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.737574 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpjpn" event={"ID":"974f077c-833b-4949-8d1a-2269e1d0df1c","Type":"ContainerStarted","Data":"da817723b186401d4421047db7ac61e7863856c93769613cb69e66ead40b9660"}
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.740160 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="7e895bd5c6e21977d6a07da7fbbf215e42ea81d6910b18a5ec4218a548cc487e" exitCode=0
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.740192 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"7e895bd5c6e21977d6a07da7fbbf215e42ea81d6910b18a5ec4218a548cc487e"}
Oct 13 08:57:00 crc kubenswrapper[4833]: I1013 08:57:00.740243 4833 scope.go:117] "RemoveContainer" containerID="a76086e46c8a4e9ad69dc7869bcdd6e98cbdb68d0ed85c8463b29d24ee04f7f8"
Oct 13 08:57:01 crc kubenswrapper[4833]: I1013 08:57:01.749783 4833 generic.go:334] "Generic (PLEG): container finished" podID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerID="f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e" exitCode=0
Oct 13 08:57:01 crc kubenswrapper[4833]: I1013 08:57:01.749843 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpjpn" event={"ID":"974f077c-833b-4949-8d1a-2269e1d0df1c","Type":"ContainerDied","Data":"f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e"}
Oct 13 08:57:01 crc kubenswrapper[4833]: I1013 08:57:01.754349 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446"}
Oct 13 08:57:06 crc kubenswrapper[4833]: I1013 08:57:06.809373 4833 generic.go:334] "Generic (PLEG): container finished" podID="58c23bcc-cad3-4d75-9970-b8b9335d7fe5" containerID="25db2846fcfbd2ad3b496d5d6702158d86918d390a36937d1e70f274f0311404" exitCode=0
Oct 13 08:57:06 crc kubenswrapper[4833]: I1013 08:57:06.809459 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-nwksz" event={"ID":"58c23bcc-cad3-4d75-9970-b8b9335d7fe5","Type":"ContainerDied","Data":"25db2846fcfbd2ad3b496d5d6702158d86918d390a36937d1e70f274f0311404"}
Oct 13 08:57:06 crc kubenswrapper[4833]: I1013 08:57:06.812170 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpjpn" event={"ID":"974f077c-833b-4949-8d1a-2269e1d0df1c","Type":"ContainerStarted","Data":"0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53"}
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.358063 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-nwksz"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.433088 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-1\") pod \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") "
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.433203 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-inventory\") pod \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") "
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.433328 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-0\") pod \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") "
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.433369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ssh-key\") pod \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") "
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.433483 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzcxw\" (UniqueName: \"kubernetes.io/projected/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-kube-api-access-qzcxw\") pod \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") "
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.433523 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-2\") pod \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") "
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.433582 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-telemetry-combined-ca-bundle\") pod \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\" (UID: \"58c23bcc-cad3-4d75-9970-b8b9335d7fe5\") "
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.439313 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-kube-api-access-qzcxw" (OuterVolumeSpecName: "kube-api-access-qzcxw") pod "58c23bcc-cad3-4d75-9970-b8b9335d7fe5" (UID: "58c23bcc-cad3-4d75-9970-b8b9335d7fe5"). InnerVolumeSpecName "kube-api-access-qzcxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.440133 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "58c23bcc-cad3-4d75-9970-b8b9335d7fe5" (UID: "58c23bcc-cad3-4d75-9970-b8b9335d7fe5"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.472653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "58c23bcc-cad3-4d75-9970-b8b9335d7fe5" (UID: "58c23bcc-cad3-4d75-9970-b8b9335d7fe5"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.474681 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-inventory" (OuterVolumeSpecName: "inventory") pod "58c23bcc-cad3-4d75-9970-b8b9335d7fe5" (UID: "58c23bcc-cad3-4d75-9970-b8b9335d7fe5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.478219 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "58c23bcc-cad3-4d75-9970-b8b9335d7fe5" (UID: "58c23bcc-cad3-4d75-9970-b8b9335d7fe5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.479181 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "58c23bcc-cad3-4d75-9970-b8b9335d7fe5" (UID: "58c23bcc-cad3-4d75-9970-b8b9335d7fe5"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.483067 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "58c23bcc-cad3-4d75-9970-b8b9335d7fe5" (UID: "58c23bcc-cad3-4d75-9970-b8b9335d7fe5"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.535558 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.535593 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzcxw\" (UniqueName: \"kubernetes.io/projected/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-kube-api-access-qzcxw\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.535610 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.535621 4833 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.535635 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.535647 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-inventory\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.535659 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/58c23bcc-cad3-4d75-9970-b8b9335d7fe5-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.832515 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-nwksz" event={"ID":"58c23bcc-cad3-4d75-9970-b8b9335d7fe5","Type":"ContainerDied","Data":"1387134d0fe832fffc30d1d711eace0114647938367aead162bb605b44c062c2"}
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.832879 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1387134d0fe832fffc30d1d711eace0114647938367aead162bb605b44c062c2"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.832670 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-nwksz"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.911830 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"]
Oct 13 08:57:08 crc kubenswrapper[4833]: E1013 08:57:08.916318 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c23bcc-cad3-4d75-9970-b8b9335d7fe5" containerName="telemetry-openstack-openstack-cell1"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.916422 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c23bcc-cad3-4d75-9970-b8b9335d7fe5" containerName="telemetry-openstack-openstack-cell1"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.916727 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c23bcc-cad3-4d75-9970-b8b9335d7fe5" containerName="telemetry-openstack-openstack-cell1"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.918472 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.920926 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.921407 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.923395 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.923970 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.924003 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.924043 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"]
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.955438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.955554 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgtz\" (UniqueName: \"kubernetes.io/projected/d9df95b1-d108-4695-b002-f586578d6afe-kube-api-access-dwgtz\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.955660 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.955709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:08 crc kubenswrapper[4833]: I1013 08:57:08.955772 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.058077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.058161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.058237 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.058316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.058378 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgtz\" (UniqueName: \"kubernetes.io/projected/d9df95b1-d108-4695-b002-f586578d6afe-kube-api-access-dwgtz\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.062268 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.065755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.066263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.068249 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.103922 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgtz\" (UniqueName: \"kubernetes.io/projected/d9df95b1-d108-4695-b002-f586578d6afe-kube-api-access-dwgtz\") pod \"neutron-sriov-openstack-openstack-cell1-dzjpn\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.246750 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.812273 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"]
Oct 13 08:57:09 crc kubenswrapper[4833]: I1013 08:57:09.842188 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn" event={"ID":"d9df95b1-d108-4695-b002-f586578d6afe","Type":"ContainerStarted","Data":"183e4b2aafc25b97dbd6a4aaea5696fb110097d4330b078f0d8ac7c03ea0fe32"}
Oct 13 08:57:11 crc kubenswrapper[4833]: I1013 08:57:11.864606 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn" event={"ID":"d9df95b1-d108-4695-b002-f586578d6afe","Type":"ContainerStarted","Data":"b322d6deb6d7fe2d4b8c82e65ef54daa10436b7cb86c2dcd387cc477832a795b"}
Oct 13 08:57:11 crc kubenswrapper[4833]: I1013 08:57:11.895894 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn" podStartSLOduration=2.800393751 podStartE2EDuration="3.895878281s" podCreationTimestamp="2025-10-13 08:57:08 +0000 UTC" firstStartedPulling="2025-10-13 08:57:09.818620011 +0000 UTC m=+8919.919042927" lastFinishedPulling="2025-10-13 08:57:10.914104541 +0000 UTC m=+8921.014527457" observedRunningTime="2025-10-13 08:57:11.89409502 +0000 UTC m=+8921.994517936" watchObservedRunningTime="2025-10-13 08:57:11.895878281 +0000 UTC m=+8921.996301197"
Oct 13 08:57:13 crc kubenswrapper[4833]: I1013 08:57:13.885902 4833 generic.go:334] "Generic (PLEG): container finished" podID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerID="0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53" exitCode=0
Oct 13 08:57:13 crc kubenswrapper[4833]: I1013 08:57:13.885980 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpjpn" event={"ID":"974f077c-833b-4949-8d1a-2269e1d0df1c","Type":"ContainerDied","Data":"0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53"}
Oct 13 08:57:14 crc kubenswrapper[4833]: I1013 08:57:14.902139 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpjpn" event={"ID":"974f077c-833b-4949-8d1a-2269e1d0df1c","Type":"ContainerStarted","Data":"a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9"}
Oct 13 08:57:14 crc kubenswrapper[4833]: I1013 08:57:14.950139 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mpjpn" podStartSLOduration=3.306036668 podStartE2EDuration="15.950112484s" podCreationTimestamp="2025-10-13 08:56:59 +0000 UTC" firstStartedPulling="2025-10-13 08:57:01.752237255 +0000 UTC m=+8911.852660171" lastFinishedPulling="2025-10-13 08:57:14.396313071 +0000 UTC m=+8924.496735987" observedRunningTime="2025-10-13 08:57:14.931919225 +0000 UTC m=+8925.032342151" watchObservedRunningTime="2025-10-13 08:57:14.950112484 +0000 UTC m=+8925.050535440"
Oct 13 08:57:20 crc kubenswrapper[4833]: I1013 08:57:20.044188 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:57:20 crc kubenswrapper[4833]: I1013 08:57:20.044665 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:57:20 crc kubenswrapper[4833]: I1013 08:57:20.101413 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:57:21 crc kubenswrapper[4833]: I1013 08:57:21.019835 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:57:21 crc kubenswrapper[4833]: I1013 08:57:21.065396 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpjpn"]
Oct 13 08:57:22 crc kubenswrapper[4833]: I1013 08:57:22.993914 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mpjpn" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerName="registry-server" containerID="cri-o://a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9" gracePeriod=2
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.554972 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.719979 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-utilities\") pod \"974f077c-833b-4949-8d1a-2269e1d0df1c\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") "
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.720356 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-catalog-content\") pod \"974f077c-833b-4949-8d1a-2269e1d0df1c\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") "
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.720386 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-286g4\" (UniqueName: \"kubernetes.io/projected/974f077c-833b-4949-8d1a-2269e1d0df1c-kube-api-access-286g4\") pod \"974f077c-833b-4949-8d1a-2269e1d0df1c\" (UID: \"974f077c-833b-4949-8d1a-2269e1d0df1c\") "
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.720714 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-utilities" (OuterVolumeSpecName: "utilities") pod "974f077c-833b-4949-8d1a-2269e1d0df1c" (UID: "974f077c-833b-4949-8d1a-2269e1d0df1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.721230 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.727331 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974f077c-833b-4949-8d1a-2269e1d0df1c-kube-api-access-286g4" (OuterVolumeSpecName: "kube-api-access-286g4") pod "974f077c-833b-4949-8d1a-2269e1d0df1c" (UID: "974f077c-833b-4949-8d1a-2269e1d0df1c"). InnerVolumeSpecName "kube-api-access-286g4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.801788 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "974f077c-833b-4949-8d1a-2269e1d0df1c" (UID: "974f077c-833b-4949-8d1a-2269e1d0df1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.823376 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/974f077c-833b-4949-8d1a-2269e1d0df1c-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:23 crc kubenswrapper[4833]: I1013 08:57:23.823644 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-286g4\" (UniqueName: \"kubernetes.io/projected/974f077c-833b-4949-8d1a-2269e1d0df1c-kube-api-access-286g4\") on node \"crc\" DevicePath \"\""
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.010167 4833 generic.go:334] "Generic (PLEG): container finished" podID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerID="a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9" exitCode=0
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.010214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpjpn" event={"ID":"974f077c-833b-4949-8d1a-2269e1d0df1c","Type":"ContainerDied","Data":"a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9"}
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.010240 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpjpn" event={"ID":"974f077c-833b-4949-8d1a-2269e1d0df1c","Type":"ContainerDied","Data":"da817723b186401d4421047db7ac61e7863856c93769613cb69e66ead40b9660"}
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.010257 4833 scope.go:117] "RemoveContainer" containerID="a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9"
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.010266 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpjpn"
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.066851 4833 scope.go:117] "RemoveContainer" containerID="0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53"
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.068031 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpjpn"]
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.082719 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mpjpn"]
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.101181 4833 scope.go:117] "RemoveContainer" containerID="f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e"
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.138928 4833 scope.go:117] "RemoveContainer" containerID="a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9"
Oct 13 08:57:24 crc kubenswrapper[4833]: E1013 08:57:24.139459 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9\": container with ID starting with a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9 not found: ID does not exist" containerID="a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9"
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.139505 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9"} err="failed to get container status \"a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9\": rpc error: code = NotFound desc = could not find container \"a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9\": container with ID starting with a6d9e7e4e8ea0db9bff3069a5d6057b87665b11046fc213247899cc1e11faff9 not found: ID does not exist"
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.139556 4833 scope.go:117] "RemoveContainer" containerID="0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53"
Oct 13 08:57:24 crc kubenswrapper[4833]: E1013 08:57:24.139901 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53\": container with ID starting with 0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53 not found: ID does not exist" containerID="0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53"
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.139922 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53"} err="failed to get container status \"0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53\": rpc error: code = NotFound desc = could not find container \"0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53\": container with ID starting with 0a4e977c9cb892d5d12090f6a297e28e4f9a7fd9a7c67aec2726137f6c291f53 not found: ID does not exist"
Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.139943 4833 scope.go:117] "RemoveContainer" containerID="f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e"
Oct 13 08:57:24 crc kubenswrapper[4833]: E1013 08:57:24.140216 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e\": container with ID starting with f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e not found: ID does not exist" containerID="f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e"
err="rpc error: code = NotFound desc = could not find container \"f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e\": container with ID starting with f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e not found: ID does not exist" containerID="f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e" Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.140234 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e"} err="failed to get container status \"f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e\": rpc error: code = NotFound desc = could not find container \"f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e\": container with ID starting with f8741fde71a72366d26763b466bf0facf5a54cc56e1c7c06697f66dd5473791e not found: ID does not exist" Oct 13 08:57:24 crc kubenswrapper[4833]: I1013 08:57:24.643071 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" path="/var/lib/kubelet/pods/974f077c-833b-4949-8d1a-2269e1d0df1c/volumes" Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.710383 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zz4k8"] Oct 13 08:58:20 crc kubenswrapper[4833]: E1013 08:58:20.711637 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerName="registry-server" Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.711650 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerName="registry-server" Oct 13 08:58:20 crc kubenswrapper[4833]: E1013 08:58:20.711673 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerName="extract-utilities" Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.711681 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerName="extract-utilities" Oct 13 08:58:20 crc kubenswrapper[4833]: E1013 08:58:20.711694 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerName="extract-content" Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.711700 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerName="extract-content" Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.712011 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="974f077c-833b-4949-8d1a-2269e1d0df1c" containerName="registry-server" Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.714454 4833 util.go:30] "No sandbox for pod can be found. 
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.723072 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zz4k8"]
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.830310 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-utilities\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.830357 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-catalog-content\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.830425 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkrdk\" (UniqueName: \"kubernetes.io/projected/912d9b70-7e03-4f09-b932-aca04e217855-kube-api-access-zkrdk\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.932813 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-utilities\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.933120 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-catalog-content\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.933259 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-utilities\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.933690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-catalog-content\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.934033 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkrdk\" (UniqueName: \"kubernetes.io/projected/912d9b70-7e03-4f09-b932-aca04e217855-kube-api-access-zkrdk\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:20 crc kubenswrapper[4833]: I1013 08:58:20.956770 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkrdk\" (UniqueName: \"kubernetes.io/projected/912d9b70-7e03-4f09-b932-aca04e217855-kube-api-access-zkrdk\") pod \"community-operators-zz4k8\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") " pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:21 crc kubenswrapper[4833]: I1013 08:58:21.093141 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:21 crc kubenswrapper[4833]: I1013 08:58:21.620081 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zz4k8"]
Oct 13 08:58:21 crc kubenswrapper[4833]: I1013 08:58:21.699893 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4k8" event={"ID":"912d9b70-7e03-4f09-b932-aca04e217855","Type":"ContainerStarted","Data":"149bddb0149ed29fbc2504ec2012344e5b13eebd9901b5fd6d70ad8bd5069c2d"}
Oct 13 08:58:22 crc kubenswrapper[4833]: I1013 08:58:22.711824 4833 generic.go:334] "Generic (PLEG): container finished" podID="912d9b70-7e03-4f09-b932-aca04e217855" containerID="25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9" exitCode=0
Oct 13 08:58:22 crc kubenswrapper[4833]: I1013 08:58:22.711965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4k8" event={"ID":"912d9b70-7e03-4f09-b932-aca04e217855","Type":"ContainerDied","Data":"25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9"}
Oct 13 08:58:22 crc kubenswrapper[4833]: I1013 08:58:22.715167 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 13 08:58:23 crc kubenswrapper[4833]: I1013 08:58:23.723553 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4k8" event={"ID":"912d9b70-7e03-4f09-b932-aca04e217855","Type":"ContainerStarted","Data":"3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd"}
Oct 13 08:58:24 crc kubenswrapper[4833]: I1013 08:58:24.741102 4833 generic.go:334] "Generic (PLEG): container finished" podID="912d9b70-7e03-4f09-b932-aca04e217855" containerID="3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd" exitCode=0
Oct 13 08:58:24 crc kubenswrapper[4833]: I1013 08:58:24.741203 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4k8" event={"ID":"912d9b70-7e03-4f09-b932-aca04e217855","Type":"ContainerDied","Data":"3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd"}
Oct 13 08:58:26 crc kubenswrapper[4833]: I1013 08:58:26.775140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4k8" event={"ID":"912d9b70-7e03-4f09-b932-aca04e217855","Type":"ContainerStarted","Data":"6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf"}
Oct 13 08:58:26 crc kubenswrapper[4833]: I1013 08:58:26.805266 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zz4k8" podStartSLOduration=4.363421631 podStartE2EDuration="6.805249112s" podCreationTimestamp="2025-10-13 08:58:20 +0000 UTC" firstStartedPulling="2025-10-13 08:58:22.714776877 +0000 UTC m=+8992.815199793" lastFinishedPulling="2025-10-13 08:58:25.156604358 +0000 UTC m=+8995.257027274" observedRunningTime="2025-10-13 08:58:26.804906963 +0000 UTC m=+8996.905329879" watchObservedRunningTime="2025-10-13 08:58:26.805249112 +0000 UTC m=+8996.905672028"
Oct 13 08:58:31 crc kubenswrapper[4833]: I1013 08:58:31.094067 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:31 crc kubenswrapper[4833]: I1013 08:58:31.094586 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:31 crc kubenswrapper[4833]: I1013 08:58:31.159249 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:31 crc kubenswrapper[4833]: I1013 08:58:31.890335 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:31 crc kubenswrapper[4833]: I1013 08:58:31.948619 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zz4k8"]
Oct 13 08:58:33 crc kubenswrapper[4833]: I1013 08:58:33.847676 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zz4k8" podUID="912d9b70-7e03-4f09-b932-aca04e217855" containerName="registry-server" containerID="cri-o://6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf" gracePeriod=2
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.828081 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.861330 4833 generic.go:334] "Generic (PLEG): container finished" podID="912d9b70-7e03-4f09-b932-aca04e217855" containerID="6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf" exitCode=0
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.861409 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz4k8"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.861437 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4k8" event={"ID":"912d9b70-7e03-4f09-b932-aca04e217855","Type":"ContainerDied","Data":"6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf"}
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.863346 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4k8" event={"ID":"912d9b70-7e03-4f09-b932-aca04e217855","Type":"ContainerDied","Data":"149bddb0149ed29fbc2504ec2012344e5b13eebd9901b5fd6d70ad8bd5069c2d"}
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.863464 4833 scope.go:117] "RemoveContainer" containerID="6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.887029 4833 scope.go:117] "RemoveContainer" containerID="3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.908095 4833 scope.go:117] "RemoveContainer" containerID="25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.956139 4833 scope.go:117] "RemoveContainer" containerID="6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf"
Oct 13 08:58:34 crc kubenswrapper[4833]: E1013 08:58:34.957252 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf\": container with ID starting with 6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf not found: ID does not exist" containerID="6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.957353 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf"} err="failed to get container status \"6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf\": rpc error: code = NotFound desc = could not find container \"6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf\": container with ID starting with 6d1d5ae6ae00ea8f1046b7cdfa784cad3b6acc08e8c406cac801299e8ece7ddf not found: ID does not exist"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.957433 4833 scope.go:117] "RemoveContainer" containerID="3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd"
Oct 13 08:58:34 crc kubenswrapper[4833]: E1013 08:58:34.957962 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd\": container with ID starting with 3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd not found: ID does not exist" containerID="3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.958019 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd"} err="failed to get container status \"3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd\": rpc error: code = NotFound desc = could not find container \"3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd\": container with ID starting with 3d3572e8ec14959cac4757efa299851f00be0ffad04845201c189f3bbe10cbcd not found: ID does not exist"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.958056 4833 scope.go:117] "RemoveContainer" containerID="25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9"
Oct 13 08:58:34 crc kubenswrapper[4833]: E1013 08:58:34.958451 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9\": container with ID starting with 25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9 not found: ID does not exist" containerID="25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.958588 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9"} err="failed to get container status \"25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9\": rpc error: code = NotFound desc = could not find container \"25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9\": container with ID starting with 25dec9954163b101e6079caffd0378b200cb26544b67ae96f36fd78cca8f21b9 not found: ID does not exist"
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.976691 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkrdk\" (UniqueName: \"kubernetes.io/projected/912d9b70-7e03-4f09-b932-aca04e217855-kube-api-access-zkrdk\") pod \"912d9b70-7e03-4f09-b932-aca04e217855\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") "
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.976790 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-utilities\") pod \"912d9b70-7e03-4f09-b932-aca04e217855\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") "
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.976976 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-catalog-content\") pod \"912d9b70-7e03-4f09-b932-aca04e217855\" (UID: \"912d9b70-7e03-4f09-b932-aca04e217855\") "
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.978114 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-utilities" (OuterVolumeSpecName: "utilities") pod "912d9b70-7e03-4f09-b932-aca04e217855" (UID: "912d9b70-7e03-4f09-b932-aca04e217855"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:58:34 crc kubenswrapper[4833]: I1013 08:58:34.983815 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/912d9b70-7e03-4f09-b932-aca04e217855-kube-api-access-zkrdk" (OuterVolumeSpecName: "kube-api-access-zkrdk") pod "912d9b70-7e03-4f09-b932-aca04e217855" (UID: "912d9b70-7e03-4f09-b932-aca04e217855"). InnerVolumeSpecName "kube-api-access-zkrdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:58:35 crc kubenswrapper[4833]: I1013 08:58:35.023129 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "912d9b70-7e03-4f09-b932-aca04e217855" (UID: "912d9b70-7e03-4f09-b932-aca04e217855"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 08:58:35 crc kubenswrapper[4833]: I1013 08:58:35.079858 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkrdk\" (UniqueName: \"kubernetes.io/projected/912d9b70-7e03-4f09-b932-aca04e217855-kube-api-access-zkrdk\") on node \"crc\" DevicePath \"\""
Oct 13 08:58:35 crc kubenswrapper[4833]: I1013 08:58:35.079898 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 08:58:35 crc kubenswrapper[4833]: I1013 08:58:35.079908 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912d9b70-7e03-4f09-b932-aca04e217855-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 08:58:35 crc kubenswrapper[4833]: I1013 08:58:35.201746 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zz4k8"]
Oct 13 08:58:35 crc kubenswrapper[4833]: I1013 08:58:35.212613 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zz4k8"]
Oct 13 08:58:36 crc kubenswrapper[4833]: I1013 08:58:36.643462 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="912d9b70-7e03-4f09-b932-aca04e217855" path="/var/lib/kubelet/pods/912d9b70-7e03-4f09-b932-aca04e217855/volumes"
Oct 13 08:59:00 crc kubenswrapper[4833]: I1013 08:59:00.543377 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:59:00 crc kubenswrapper[4833]: I1013 08:59:00.545846 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 08:59:12 crc kubenswrapper[4833]: I1013 08:59:12.352737 4833 generic.go:334] "Generic (PLEG): container finished" podID="d9df95b1-d108-4695-b002-f586578d6afe" containerID="b322d6deb6d7fe2d4b8c82e65ef54daa10436b7cb86c2dcd387cc477832a795b" exitCode=0
Oct 13 08:59:12 crc kubenswrapper[4833]: I1013 08:59:12.352868 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn" event={"ID":"d9df95b1-d108-4695-b002-f586578d6afe","Type":"ContainerDied","Data":"b322d6deb6d7fe2d4b8c82e65ef54daa10436b7cb86c2dcd387cc477832a795b"}
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.857895 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.921442 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-agent-neutron-config-0\") pod \"d9df95b1-d108-4695-b002-f586578d6afe\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") "
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.921568 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-inventory\") pod \"d9df95b1-d108-4695-b002-f586578d6afe\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") "
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.921831 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-ssh-key\") pod \"d9df95b1-d108-4695-b002-f586578d6afe\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") "
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.921886 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwgtz\" (UniqueName: \"kubernetes.io/projected/d9df95b1-d108-4695-b002-f586578d6afe-kube-api-access-dwgtz\") pod \"d9df95b1-d108-4695-b002-f586578d6afe\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") "
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.921980 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-combined-ca-bundle\") pod \"d9df95b1-d108-4695-b002-f586578d6afe\" (UID: \"d9df95b1-d108-4695-b002-f586578d6afe\") "
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.926766 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "d9df95b1-d108-4695-b002-f586578d6afe" (UID: "d9df95b1-d108-4695-b002-f586578d6afe"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.934947 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9df95b1-d108-4695-b002-f586578d6afe-kube-api-access-dwgtz" (OuterVolumeSpecName: "kube-api-access-dwgtz") pod "d9df95b1-d108-4695-b002-f586578d6afe" (UID: "d9df95b1-d108-4695-b002-f586578d6afe"). InnerVolumeSpecName "kube-api-access-dwgtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.952727 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "d9df95b1-d108-4695-b002-f586578d6afe" (UID: "d9df95b1-d108-4695-b002-f586578d6afe"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.978863 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d9df95b1-d108-4695-b002-f586578d6afe" (UID: "d9df95b1-d108-4695-b002-f586578d6afe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:59:13 crc kubenswrapper[4833]: I1013 08:59:13.992995 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-inventory" (OuterVolumeSpecName: "inventory") pod "d9df95b1-d108-4695-b002-f586578d6afe" (UID: "d9df95b1-d108-4695-b002-f586578d6afe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.024967 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.025010 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-inventory\") on node \"crc\" DevicePath \"\""
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.025024 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.025038 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwgtz\" (UniqueName: \"kubernetes.io/projected/d9df95b1-d108-4695-b002-f586578d6afe-kube-api-access-dwgtz\") on node \"crc\" DevicePath \"\""
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.025051 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9df95b1-d108-4695-b002-f586578d6afe-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.380712 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn" event={"ID":"d9df95b1-d108-4695-b002-f586578d6afe","Type":"ContainerDied","Data":"183e4b2aafc25b97dbd6a4aaea5696fb110097d4330b078f0d8ac7c03ea0fe32"}
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.380756 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183e4b2aafc25b97dbd6a4aaea5696fb110097d4330b078f0d8ac7c03ea0fe32"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.380774 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-dzjpn"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.496885 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"]
Oct 13 08:59:14 crc kubenswrapper[4833]: E1013 08:59:14.497479 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912d9b70-7e03-4f09-b932-aca04e217855" containerName="extract-utilities"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.497502 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="912d9b70-7e03-4f09-b932-aca04e217855" containerName="extract-utilities"
Oct 13 08:59:14 crc kubenswrapper[4833]: E1013 08:59:14.497552 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9df95b1-d108-4695-b002-f586578d6afe" containerName="neutron-sriov-openstack-openstack-cell1"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.497564 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9df95b1-d108-4695-b002-f586578d6afe" containerName="neutron-sriov-openstack-openstack-cell1"
Oct 13 08:59:14 crc kubenswrapper[4833]: E1013 08:59:14.497584 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912d9b70-7e03-4f09-b932-aca04e217855" containerName="registry-server"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.497595 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="912d9b70-7e03-4f09-b932-aca04e217855" containerName="registry-server"
Oct 13 08:59:14 crc kubenswrapper[4833]: E1013 08:59:14.497633 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912d9b70-7e03-4f09-b932-aca04e217855" containerName="extract-content"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.497643 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="912d9b70-7e03-4f09-b932-aca04e217855" containerName="extract-content"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.497960 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="912d9b70-7e03-4f09-b932-aca04e217855" containerName="registry-server"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.497992 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9df95b1-d108-4695-b002-f586578d6afe" containerName="neutron-sriov-openstack-openstack-cell1"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.499029 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.502357 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.502901 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.503137 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.506063 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.508611 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.509961 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"]
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.535167 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.535309 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.535411 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nspx\" (UniqueName: \"kubernetes.io/projected/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-kube-api-access-2nspx\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.535458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.535893 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.636620 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nspx\" (UniqueName: \"kubernetes.io/projected/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-kube-api-access-2nspx\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.636673 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.636762 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.636789 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.636872 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.640929 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.641230 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.641512 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.641847 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.652589 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nspx\" (UniqueName: \"kubernetes.io/projected/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-kube-api-access-2nspx\") pod \"neutron-dhcp-openstack-openstack-cell1-mvqtv\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:14 crc kubenswrapper[4833]: I1013 08:59:14.828604 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"
Oct 13 08:59:15 crc kubenswrapper[4833]: I1013 08:59:15.409745 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv"]
Oct 13 08:59:16 crc kubenswrapper[4833]: I1013 08:59:16.403507 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv" event={"ID":"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c","Type":"ContainerStarted","Data":"a67cfd81ce15b2e7556dddd16aa7678532c1aba529dce6f370f8aa2f889b34bb"}
Oct 13 08:59:16 crc kubenswrapper[4833]: I1013 08:59:16.404508 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv" event={"ID":"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c","Type":"ContainerStarted","Data":"4fd978a080c00577f332ba71fe7aa28040edd58da158ed4fac71009caca914df"}
Oct 13 08:59:16 crc kubenswrapper[4833]: I1013 08:59:16.426041 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv" podStartSLOduration=1.8070565969999999 podStartE2EDuration="2.426022736s" podCreationTimestamp="2025-10-13 08:59:14 +0000 UTC" firstStartedPulling="2025-10-13 08:59:15.427717956 +0000 UTC m=+9045.528140872" lastFinishedPulling="2025-10-13 08:59:16.046684075 +0000 UTC m=+9046.147107011" observedRunningTime="2025-10-13 08:59:16.424147513 +0000 UTC m=+9046.524570439" watchObservedRunningTime="2025-10-13 08:59:16.426022736 +0000 UTC m=+9046.526445662"
Oct 13 08:59:30 crc kubenswrapper[4833]: I1013 08:59:30.542285 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 08:59:30 crc kubenswrapper[4833]: I1013 08:59:30.543180 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.173163 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"]
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.176257 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.178967 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.179461 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.191271 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"]
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.321727 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3322e6-c658-4388-bc12-ee24558292df-secret-volume\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.321976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vr7\" (UniqueName: \"kubernetes.io/projected/0a3322e6-c658-4388-bc12-ee24558292df-kube-api-access-j2vr7\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.322238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3322e6-c658-4388-bc12-ee24558292df-config-volume\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.425253 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3322e6-c658-4388-bc12-ee24558292df-secret-volume\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.425410 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2vr7\" (UniqueName: \"kubernetes.io/projected/0a3322e6-c658-4388-bc12-ee24558292df-kube-api-access-j2vr7\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.425503 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3322e6-c658-4388-bc12-ee24558292df-config-volume\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.426905 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3322e6-c658-4388-bc12-ee24558292df-config-volume\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.438870 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3322e6-c658-4388-bc12-ee24558292df-secret-volume\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.452113 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2vr7\" (UniqueName: \"kubernetes.io/projected/0a3322e6-c658-4388-bc12-ee24558292df-kube-api-access-j2vr7\") pod \"collect-profiles-29339100-5jz2c\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.511271 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.542923 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.542986 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.543036 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.543863 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.543924 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" gracePeriod=600
Oct 13 09:00:00 crc kubenswrapper[4833]: E1013 09:00:00.674398 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.939717 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" exitCode=0
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.939762 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446"}
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.939797 4833 scope.go:117] "RemoveContainer" containerID="7e895bd5c6e21977d6a07da7fbbf215e42ea81d6910b18a5ec4218a548cc487e"
Oct 13 09:00:00 crc kubenswrapper[4833]: I1013 09:00:00.940528 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446"
Oct 13 09:00:00 crc kubenswrapper[4833]: E1013 09:00:00.940832 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 09:00:01 crc kubenswrapper[4833]: I1013 09:00:01.048408 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"]
Oct 13 09:00:01 crc kubenswrapper[4833]: I1013 09:00:01.953794 4833 generic.go:334] "Generic (PLEG): container finished" podID="0a3322e6-c658-4388-bc12-ee24558292df" containerID="042392dea7fd7362273056983adef5a464272278ce41830a457b68c0b90db82f" exitCode=0
Oct 13 09:00:01 crc kubenswrapper[4833]: I1013 09:00:01.953894 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c" event={"ID":"0a3322e6-c658-4388-bc12-ee24558292df","Type":"ContainerDied","Data":"042392dea7fd7362273056983adef5a464272278ce41830a457b68c0b90db82f"}
Oct 13 09:00:01 crc kubenswrapper[4833]: I1013 09:00:01.954260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c" event={"ID":"0a3322e6-c658-4388-bc12-ee24558292df","Type":"ContainerStarted","Data":"1104eb308714ea475bd377d9047c60147d7ae2bf9905eddc68f4589d7f8030a3"}
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.400374 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.598083 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3322e6-c658-4388-bc12-ee24558292df-secret-volume\") pod \"0a3322e6-c658-4388-bc12-ee24558292df\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") "
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.598362 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2vr7\" (UniqueName: \"kubernetes.io/projected/0a3322e6-c658-4388-bc12-ee24558292df-kube-api-access-j2vr7\") pod \"0a3322e6-c658-4388-bc12-ee24558292df\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") "
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.598440 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3322e6-c658-4388-bc12-ee24558292df-config-volume\") pod \"0a3322e6-c658-4388-bc12-ee24558292df\" (UID: \"0a3322e6-c658-4388-bc12-ee24558292df\") "
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.599367 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3322e6-c658-4388-bc12-ee24558292df-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a3322e6-c658-4388-bc12-ee24558292df" (UID: "0a3322e6-c658-4388-bc12-ee24558292df"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.605665 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3322e6-c658-4388-bc12-ee24558292df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a3322e6-c658-4388-bc12-ee24558292df" (UID: "0a3322e6-c658-4388-bc12-ee24558292df"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.612039 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a3322e6-c658-4388-bc12-ee24558292df-kube-api-access-j2vr7" (OuterVolumeSpecName: "kube-api-access-j2vr7") pod "0a3322e6-c658-4388-bc12-ee24558292df" (UID: "0a3322e6-c658-4388-bc12-ee24558292df"). InnerVolumeSpecName "kube-api-access-j2vr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.713596 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a3322e6-c658-4388-bc12-ee24558292df-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.713635 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2vr7\" (UniqueName: \"kubernetes.io/projected/0a3322e6-c658-4388-bc12-ee24558292df-kube-api-access-j2vr7\") on node \"crc\" DevicePath \"\""
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.713647 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a3322e6-c658-4388-bc12-ee24558292df-config-volume\") on node \"crc\" DevicePath \"\""
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.979236 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c" event={"ID":"0a3322e6-c658-4388-bc12-ee24558292df","Type":"ContainerDied","Data":"1104eb308714ea475bd377d9047c60147d7ae2bf9905eddc68f4589d7f8030a3"}
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.979280 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1104eb308714ea475bd377d9047c60147d7ae2bf9905eddc68f4589d7f8030a3"
Oct 13 09:00:03 crc kubenswrapper[4833]: I1013 09:00:03.979291 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339100-5jz2c"
Oct 13 09:00:04 crc kubenswrapper[4833]: I1013 09:00:04.492920 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"]
Oct 13 09:00:04 crc kubenswrapper[4833]: I1013 09:00:04.506950 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339055-77zpf"]
Oct 13 09:00:04 crc kubenswrapper[4833]: I1013 09:00:04.644438 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b04da5-acdc-491e-b1a6-c2a377dc8284" path="/var/lib/kubelet/pods/40b04da5-acdc-491e-b1a6-c2a377dc8284/volumes"
Oct 13 09:00:15 crc kubenswrapper[4833]: I1013 09:00:15.626534 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446"
Oct 13 09:00:15 crc kubenswrapper[4833]: E1013 09:00:15.627416 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 09:00:30 crc kubenswrapper[4833]: I1013 09:00:30.644747 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446"
Oct 13 09:00:30 crc kubenswrapper[4833]: E1013 09:00:30.655329 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 09:00:45 crc kubenswrapper[4833]: I1013 09:00:45.628285 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446"
Oct 13 09:00:45 crc kubenswrapper[4833]: E1013 09:00:45.629715 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 09:00:57 crc kubenswrapper[4833]: I1013 09:00:57.918501 4833 scope.go:117] "RemoveContainer" containerID="77d1505ed1c409974f1884c5c44b65e1e7d11bf83e2f541f8523d078a2f3142d"
Oct 13 09:00:59 crc kubenswrapper[4833]: I1013 09:00:59.627613 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446"
Oct 13 09:00:59 crc kubenswrapper[4833]: E1013 09:00:59.628306 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.164592 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29339101-7ntm4"]
Oct 13 09:01:00 crc kubenswrapper[4833]: E1013 09:01:00.165060 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a3322e6-c658-4388-bc12-ee24558292df" containerName="collect-profiles"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.165081 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a3322e6-c658-4388-bc12-ee24558292df" containerName="collect-profiles"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.165278 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a3322e6-c658-4388-bc12-ee24558292df" containerName="collect-profiles"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.166433 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.189235 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29339101-7ntm4"]
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.289913 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-fernet-keys\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.290247 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bltjp\" (UniqueName: \"kubernetes.io/projected/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-kube-api-access-bltjp\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.290780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-config-data\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.290866 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-combined-ca-bundle\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.393271 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-config-data\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.393333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-combined-ca-bundle\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.393384 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-fernet-keys\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.393438 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bltjp\" (UniqueName: \"kubernetes.io/projected/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-kube-api-access-bltjp\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.399562 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-fernet-keys\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.399883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-combined-ca-bundle\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.400339 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-config-data\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.408903 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bltjp\" (UniqueName: \"kubernetes.io/projected/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-kube-api-access-bltjp\") pod \"keystone-cron-29339101-7ntm4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") " pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.489619 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:00 crc kubenswrapper[4833]: I1013 09:01:00.943068 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29339101-7ntm4"]
Oct 13 09:01:01 crc kubenswrapper[4833]: I1013 09:01:01.601811 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339101-7ntm4" event={"ID":"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4","Type":"ContainerStarted","Data":"2b60ad7553848a801e6429a012b23918baf22d9a1bbc1569383e0a2df2768650"}
Oct 13 09:01:01 crc kubenswrapper[4833]: I1013 09:01:01.602437 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339101-7ntm4" event={"ID":"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4","Type":"ContainerStarted","Data":"6e5cb5297ecb673a1d47f93aa03c2c81b4134df7caef22ff23c55ba2d63049bb"}
Oct 13 09:01:01 crc kubenswrapper[4833]: I1013 09:01:01.618259 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29339101-7ntm4" podStartSLOduration=1.618242497 podStartE2EDuration="1.618242497s" podCreationTimestamp="2025-10-13 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 09:01:01.616753954 +0000 UTC m=+9151.717176870" watchObservedRunningTime="2025-10-13 09:01:01.618242497 +0000 UTC m=+9151.718665413"
Oct 13 09:01:04 crc kubenswrapper[4833]: I1013 09:01:04.639515 4833 generic.go:334] "Generic (PLEG): container finished" podID="8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4" containerID="2b60ad7553848a801e6429a012b23918baf22d9a1bbc1569383e0a2df2768650" exitCode=0
Oct 13 09:01:04 crc kubenswrapper[4833]: I1013 09:01:04.646848 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339101-7ntm4" event={"ID":"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4","Type":"ContainerDied","Data":"2b60ad7553848a801e6429a012b23918baf22d9a1bbc1569383e0a2df2768650"}
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.069827 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339101-7ntm4"
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.138517 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-fernet-keys\") pod \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") "
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.138618 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bltjp\" (UniqueName: \"kubernetes.io/projected/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-kube-api-access-bltjp\") pod \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") "
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.138664 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-config-data\") pod \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") "
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.138747 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-combined-ca-bundle\") pod \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\" (UID: \"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4\") "
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.144990 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-kube-api-access-bltjp" (OuterVolumeSpecName: "kube-api-access-bltjp") pod "8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4" (UID: "8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4"). InnerVolumeSpecName "kube-api-access-bltjp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.149095 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4" (UID: "8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.184969 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4" (UID: "8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.205884 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-config-data" (OuterVolumeSpecName: "config-data") pod "8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4" (UID: "8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.240169 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.240203 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bltjp\" (UniqueName: \"kubernetes.io/projected/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-kube-api-access-bltjp\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.240214 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.240223 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.664276 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339101-7ntm4" event={"ID":"8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4","Type":"ContainerDied","Data":"6e5cb5297ecb673a1d47f93aa03c2c81b4134df7caef22ff23c55ba2d63049bb"} Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.664534 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e5cb5297ecb673a1d47f93aa03c2c81b4134df7caef22ff23c55ba2d63049bb" Oct 13 09:01:06 crc kubenswrapper[4833]: I1013 09:01:06.664306 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339101-7ntm4" Oct 13 09:01:13 crc kubenswrapper[4833]: I1013 09:01:13.628425 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:01:13 crc kubenswrapper[4833]: E1013 09:01:13.629489 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:01:26 crc kubenswrapper[4833]: I1013 09:01:26.628860 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:01:26 crc kubenswrapper[4833]: E1013 09:01:26.629764 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:01:41 crc kubenswrapper[4833]: I1013 09:01:41.629016 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:01:41 crc kubenswrapper[4833]: E1013 09:01:41.629822 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:01:46 crc kubenswrapper[4833]: I1013 09:01:46.126813 4833 generic.go:334] "Generic (PLEG): container finished" podID="3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" containerID="a67cfd81ce15b2e7556dddd16aa7678532c1aba529dce6f370f8aa2f889b34bb" exitCode=0 Oct 13 09:01:46 crc kubenswrapper[4833]: I1013 09:01:46.126909 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv" event={"ID":"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c","Type":"ContainerDied","Data":"a67cfd81ce15b2e7556dddd16aa7678532c1aba529dce6f370f8aa2f889b34bb"} Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.600857 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.733730 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-ssh-key\") pod \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.734095 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-inventory\") pod \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.734146 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-combined-ca-bundle\") pod \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.734290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-agent-neutron-config-0\") pod \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.734394 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nspx\" (UniqueName: \"kubernetes.io/projected/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-kube-api-access-2nspx\") pod \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\" (UID: \"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c\") " Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.739111 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-kube-api-access-2nspx" (OuterVolumeSpecName: "kube-api-access-2nspx") pod "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" (UID: "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c"). InnerVolumeSpecName "kube-api-access-2nspx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.759946 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" (UID: "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.766144 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" (UID: "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.768904 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" (UID: "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.770424 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-inventory" (OuterVolumeSpecName: "inventory") pod "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" (UID: "3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.838309 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.838395 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.838419 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.838440 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nspx\" (UniqueName: \"kubernetes.io/projected/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-kube-api-access-2nspx\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:47 crc kubenswrapper[4833]: I1013 09:01:47.838458 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 09:01:48 crc kubenswrapper[4833]: I1013 09:01:48.151631 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv" event={"ID":"3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c","Type":"ContainerDied","Data":"4fd978a080c00577f332ba71fe7aa28040edd58da158ed4fac71009caca914df"} Oct 13 09:01:48 crc kubenswrapper[4833]: I1013 09:01:48.151693 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd978a080c00577f332ba71fe7aa28040edd58da158ed4fac71009caca914df" Oct 13 09:01:48 crc kubenswrapper[4833]: I1013 09:01:48.151752 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-mvqtv" Oct 13 09:01:52 crc kubenswrapper[4833]: I1013 09:01:52.628483 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:01:52 crc kubenswrapper[4833]: E1013 09:01:52.629109 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:02:03 crc kubenswrapper[4833]: I1013 09:02:03.627863 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:02:03 crc kubenswrapper[4833]: E1013 09:02:03.628790 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:02:14 crc kubenswrapper[4833]: I1013 09:02:14.628188 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:02:14 crc kubenswrapper[4833]: E1013 09:02:14.629113 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:02:14 crc kubenswrapper[4833]: I1013 09:02:14.734810 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 09:02:14 crc kubenswrapper[4833]: I1013 09:02:14.735071 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="31c249ee-bf3f-4fdd-8f22-56ef6f18881b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840" gracePeriod=30 Oct 13 09:02:14 crc kubenswrapper[4833]: I1013 09:02:14.766008 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 09:02:14 crc kubenswrapper[4833]: I1013 09:02:14.766503 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" containerName="nova-cell1-conductor-conductor" containerID="cri-o://9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092" gracePeriod=30 Oct 13 09:02:15 crc kubenswrapper[4833]: E1013 09:02:15.128600 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] 
Oct 13 09:02:15 crc kubenswrapper[4833]: E1013 09:02:15.130750 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 09:02:15 crc kubenswrapper[4833]: E1013 09:02:15.132572 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 09:02:15 crc kubenswrapper[4833]: E1013 09:02:15.132675 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" containerName="nova-cell1-conductor-conductor" Oct 13 09:02:15 crc kubenswrapper[4833]: E1013 09:02:15.763694 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 09:02:15 crc kubenswrapper[4833]: E1013 09:02:15.765947 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 09:02:15 crc kubenswrapper[4833]: E1013 09:02:15.767271 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 13 09:02:15 crc kubenswrapper[4833]: E1013 09:02:15.767308 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="31c249ee-bf3f-4fdd-8f22-56ef6f18881b" containerName="nova-cell0-conductor-conductor" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.067026 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.067779 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-api" containerID="cri-o://8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28" gracePeriod=30 Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.067787 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-log" 
containerID="cri-o://e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53" gracePeriod=30 Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.093692 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.093914 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-log" containerID="cri-o://a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a" gracePeriod=30 Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.094145 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-metadata" containerID="cri-o://ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c" gracePeriod=30 Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.112793 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.113014 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6f63dce5-488f-43e5-8217-5c855de31f30" containerName="nova-scheduler-scheduler" containerID="cri-o://545758d2f6f4b36d5c73ab20671c7d7211159aa78a02182bff7454e234246929" gracePeriod=30 Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.341335 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.460651 4833 generic.go:334] "Generic (PLEG): container finished" podID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerID="e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53" exitCode=143 Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.460738 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5","Type":"ContainerDied","Data":"e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53"} Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.464305 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be48f910-53ab-4cbd-a846-543d163f0edc","Type":"ContainerDied","Data":"a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a"} Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.464086 4833 generic.go:334] "Generic (PLEG): container finished" podID="be48f910-53ab-4cbd-a846-543d163f0edc" containerID="a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a" exitCode=143 Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.468036 4833 generic.go:334] "Generic (PLEG): container finished" podID="b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" containerID="9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092" exitCode=0 Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.468073 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4","Type":"ContainerDied","Data":"9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092"} Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.468102 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4","Type":"ContainerDied","Data":"3883e17b878d1363e5b7015af706abad93c7898a89f3fee27da8817323702a79"} Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.468126 4833 scope.go:117] "RemoveContainer" containerID="9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.468260 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.497344 4833 scope.go:117] "RemoveContainer" containerID="9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092" Oct 13 09:02:16 crc kubenswrapper[4833]: E1013 09:02:16.497912 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092\": container with ID starting with 9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092 not found: ID does not exist" containerID="9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.497970 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092"} err="failed to get container status \"9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092\": rpc error: code = NotFound desc = could not find container \"9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092\": container with ID starting with 9a9b0f375eac92bc57cf8dafdca3f62c30a35eb6b448ebba820347736a0bd092 not found: ID does not exist" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.543192 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-combined-ca-bundle\") pod \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.543254 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-config-data\") pod \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.543490 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk29g\" (UniqueName: \"kubernetes.io/projected/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-kube-api-access-hk29g\") pod \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\" (UID: \"b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4\") " Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.557779 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-kube-api-access-hk29g" (OuterVolumeSpecName: "kube-api-access-hk29g") pod "b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" (UID: "b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4"). InnerVolumeSpecName "kube-api-access-hk29g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.573737 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" (UID: "b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.589398 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-config-data" (OuterVolumeSpecName: "config-data") pod "b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" (UID: "b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:02:16 crc kubenswrapper[4833]: E1013 09:02:16.596956 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="545758d2f6f4b36d5c73ab20671c7d7211159aa78a02182bff7454e234246929" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 09:02:16 crc kubenswrapper[4833]: E1013 09:02:16.599728 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="545758d2f6f4b36d5c73ab20671c7d7211159aa78a02182bff7454e234246929" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 09:02:16 crc kubenswrapper[4833]: E1013 09:02:16.601354 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="545758d2f6f4b36d5c73ab20671c7d7211159aa78a02182bff7454e234246929" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 09:02:16 crc kubenswrapper[4833]: E1013 09:02:16.601397 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6f63dce5-488f-43e5-8217-5c855de31f30" containerName="nova-scheduler-scheduler" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.659554 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk29g\" (UniqueName: \"kubernetes.io/projected/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-kube-api-access-hk29g\") on node \"crc\" DevicePath \"\"" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.659951 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.660038 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.813466 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.827679 4833 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.839154 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 09:02:16 crc kubenswrapper[4833]: E1013 09:02:16.839596 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" containerName="neutron-dhcp-openstack-openstack-cell1" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.839613 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" containerName="neutron-dhcp-openstack-openstack-cell1" Oct 13 09:02:16 crc kubenswrapper[4833]: E1013 09:02:16.839635 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4" containerName="keystone-cron" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.839641 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4" containerName="keystone-cron" Oct 13 09:02:16 crc kubenswrapper[4833]: E1013 09:02:16.839648 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" containerName="nova-cell1-conductor-conductor" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.839654 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" containerName="nova-cell1-conductor-conductor" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.839871 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c" containerName="neutron-dhcp-openstack-openstack-cell1" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.839887 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" containerName="nova-cell1-conductor-conductor" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.839899 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4" containerName="keystone-cron" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.840623 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.851005 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.856197 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.966660 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rt4h\" (UniqueName: \"kubernetes.io/projected/f9864c95-1312-4722-a9db-1848bf00059a-kube-api-access-8rt4h\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.967030 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9864c95-1312-4722-a9db-1848bf00059a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:16 crc kubenswrapper[4833]: I1013 09:02:16.967076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9864c95-1312-4722-a9db-1848bf00059a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:17 crc kubenswrapper[4833]: I1013 09:02:17.069130 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rt4h\" (UniqueName: \"kubernetes.io/projected/f9864c95-1312-4722-a9db-1848bf00059a-kube-api-access-8rt4h\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:17 crc kubenswrapper[4833]: I1013 09:02:17.069190 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9864c95-1312-4722-a9db-1848bf00059a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:17 crc kubenswrapper[4833]: I1013 09:02:17.069243 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9864c95-1312-4722-a9db-1848bf00059a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:17 crc kubenswrapper[4833]: I1013 09:02:17.072924 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9864c95-1312-4722-a9db-1848bf00059a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:17 crc kubenswrapper[4833]: I1013 09:02:17.073687 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9864c95-1312-4722-a9db-1848bf00059a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:17 crc kubenswrapper[4833]: I1013 09:02:17.790196 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rt4h\" (UniqueName: \"kubernetes.io/projected/f9864c95-1312-4722-a9db-1848bf00059a-kube-api-access-8rt4h\") pod \"nova-cell1-conductor-0\" (UID: \"f9864c95-1312-4722-a9db-1848bf00059a\") " pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:17 crc kubenswrapper[4833]: I1013 09:02:17.850294 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:18 crc kubenswrapper[4833]: I1013 09:02:18.281220 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 09:02:18 crc kubenswrapper[4833]: I1013 09:02:18.494709 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9864c95-1312-4722-a9db-1848bf00059a","Type":"ContainerStarted","Data":"51043412188e4858dc95bf0f3d565d64d60701a878ed5a4dd4589231f6545f20"} Oct 13 09:02:18 crc kubenswrapper[4833]: I1013 09:02:18.638455 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4" path="/var/lib/kubelet/pods/b9b8e12b-a2fc-4f9f-9f65-6c9c42af70b4/volumes" Oct 13 09:02:19 crc kubenswrapper[4833]: I1013 09:02:19.500904 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": read tcp 10.217.0.2:55852->10.217.1.96:8775: read: connection reset by peer" Oct 13 09:02:19 crc kubenswrapper[4833]: I1013 09:02:19.501479 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": read tcp 10.217.0.2:55868->10.217.1.96:8775: read: connection reset by peer" Oct 13 09:02:19 crc kubenswrapper[4833]: I1013 09:02:19.506763 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9864c95-1312-4722-a9db-1848bf00059a","Type":"ContainerStarted","Data":"70d28b7fbe5ce5e6417dd92a88f3993ccedd7e2a22129e241a289f85acf7ddae"} Oct 13 09:02:19 crc kubenswrapper[4833]: I1013 09:02:19.506957 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:19 crc kubenswrapper[4833]: I1013 09:02:19.536031 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.536008523 podStartE2EDuration="3.536008523s" podCreationTimestamp="2025-10-13 09:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 09:02:19.525614297 +0000 UTC m=+9229.626037213" watchObservedRunningTime="2025-10-13 09:02:19.536008523 +0000 UTC m=+9229.636431439" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.146076 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.235236 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.244941 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.344563 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-combined-ca-bundle\") pod \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.344910 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjzlp\" (UniqueName: \"kubernetes.io/projected/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-kube-api-access-gjzlp\") pod \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.344984 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-public-tls-certs\") pod \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345031 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89lj8\" (UniqueName: \"kubernetes.io/projected/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-kube-api-access-89lj8\") pod \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345150 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-internal-tls-certs\") pod \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-config-data\") pod \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345420 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b2cj\" (UniqueName: \"kubernetes.io/projected/be48f910-53ab-4cbd-a846-543d163f0edc-kube-api-access-9b2cj\") pod \"be48f910-53ab-4cbd-a846-543d163f0edc\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345467 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-combined-ca-bundle\") pod \"be48f910-53ab-4cbd-a846-543d163f0edc\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345493 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-logs\") pod \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345607 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-combined-ca-bundle\") pod 
\"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\" (UID: \"31c249ee-bf3f-4fdd-8f22-56ef6f18881b\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345626 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-config-data\") pod \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\" (UID: \"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.345654 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-nova-metadata-tls-certs\") pod \"be48f910-53ab-4cbd-a846-543d163f0edc\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") " Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.346418 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-logs" (OuterVolumeSpecName: "logs") pod "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" (UID: "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.346617 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-logs\") on node \"crc\" DevicePath \"\"" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.351661 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-kube-api-access-gjzlp" (OuterVolumeSpecName: "kube-api-access-gjzlp") pod "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" (UID: "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5"). InnerVolumeSpecName "kube-api-access-gjzlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.354373 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-kube-api-access-89lj8" (OuterVolumeSpecName: "kube-api-access-89lj8") pod "31c249ee-bf3f-4fdd-8f22-56ef6f18881b" (UID: "31c249ee-bf3f-4fdd-8f22-56ef6f18881b"). InnerVolumeSpecName "kube-api-access-89lj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.372898 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be48f910-53ab-4cbd-a846-543d163f0edc-kube-api-access-9b2cj" (OuterVolumeSpecName: "kube-api-access-9b2cj") pod "be48f910-53ab-4cbd-a846-543d163f0edc" (UID: "be48f910-53ab-4cbd-a846-543d163f0edc"). InnerVolumeSpecName "kube-api-access-9b2cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.387648 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" (UID: "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.402767 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-config-data" (OuterVolumeSpecName: "config-data") pod "31c249ee-bf3f-4fdd-8f22-56ef6f18881b" (UID: "31c249ee-bf3f-4fdd-8f22-56ef6f18881b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.416078 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-config-data" (OuterVolumeSpecName: "config-data") pod "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" (UID: "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.440089 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" (UID: "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.447812 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be48f910-53ab-4cbd-a846-543d163f0edc-logs\") pod \"be48f910-53ab-4cbd-a846-543d163f0edc\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") "
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.448015 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-config-data\") pod \"be48f910-53ab-4cbd-a846-543d163f0edc\" (UID: \"be48f910-53ab-4cbd-a846-543d163f0edc\") "
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.448734 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be48f910-53ab-4cbd-a846-543d163f0edc-logs" (OuterVolumeSpecName: "logs") pod "be48f910-53ab-4cbd-a846-543d163f0edc" (UID: "be48f910-53ab-4cbd-a846-543d163f0edc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.452512 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be48f910-53ab-4cbd-a846-543d163f0edc" (UID: "be48f910-53ab-4cbd-a846-543d163f0edc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453624 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453678 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453689 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjzlp\" (UniqueName: \"kubernetes.io/projected/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-kube-api-access-gjzlp\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453698 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89lj8\" (UniqueName: \"kubernetes.io/projected/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-kube-api-access-89lj8\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453707 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be48f910-53ab-4cbd-a846-543d163f0edc-logs\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453715 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453723 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453732 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b2cj\" (UniqueName: \"kubernetes.io/projected/be48f910-53ab-4cbd-a846-543d163f0edc-kube-api-access-9b2cj\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.453740 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.454076 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31c249ee-bf3f-4fdd-8f22-56ef6f18881b" (UID: "31c249ee-bf3f-4fdd-8f22-56ef6f18881b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.457503 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" (UID: "ef7ccb3e-8928-43a2-abdb-6225bfabd4e5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.463615 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "be48f910-53ab-4cbd-a846-543d163f0edc" (UID: "be48f910-53ab-4cbd-a846-543d163f0edc"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.480428 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-config-data" (OuterVolumeSpecName: "config-data") pod "be48f910-53ab-4cbd-a846-543d163f0edc" (UID: "be48f910-53ab-4cbd-a846-543d163f0edc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.525809 4833 generic.go:334] "Generic (PLEG): container finished" podID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerID="8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28" exitCode=0
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.525876 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5","Type":"ContainerDied","Data":"8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28"}
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.525883 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.525903 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef7ccb3e-8928-43a2-abdb-6225bfabd4e5","Type":"ContainerDied","Data":"f0d38829228e619af6ee53f2ce0c0920d2cabb9f3e6e17f58d17d62fd45af8c7"}
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.525922 4833 scope.go:117] "RemoveContainer" containerID="8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.529547 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f63dce5-488f-43e5-8217-5c855de31f30" containerID="545758d2f6f4b36d5c73ab20671c7d7211159aa78a02182bff7454e234246929" exitCode=0
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.529605 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f63dce5-488f-43e5-8217-5c855de31f30","Type":"ContainerDied","Data":"545758d2f6f4b36d5c73ab20671c7d7211159aa78a02182bff7454e234246929"}
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.531883 4833 generic.go:334] "Generic (PLEG): container finished" podID="be48f910-53ab-4cbd-a846-543d163f0edc" containerID="ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c" exitCode=0
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.531959 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be48f910-53ab-4cbd-a846-543d163f0edc","Type":"ContainerDied","Data":"ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c"}
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.531981 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be48f910-53ab-4cbd-a846-543d163f0edc","Type":"ContainerDied","Data":"c6ad4afc0709fe5b7d35e61e2b8bfe97d0e6e5c35832a90a70bc8b55cc0f5246"}
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.531993 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.550158 4833 generic.go:334] "Generic (PLEG): container finished" podID="31c249ee-bf3f-4fdd-8f22-56ef6f18881b" containerID="51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840" exitCode=0
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.550261 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"31c249ee-bf3f-4fdd-8f22-56ef6f18881b","Type":"ContainerDied","Data":"51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840"}
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.550320 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"31c249ee-bf3f-4fdd-8f22-56ef6f18881b","Type":"ContainerDied","Data":"6d3c0b39506d869cf1ebd1940f119acca1e8bd5c90ac3fc7618827598e28d1b3"}
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.550412 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.555550 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c249ee-bf3f-4fdd-8f22-56ef6f18881b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.555573 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.555584 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be48f910-53ab-4cbd-a846-543d163f0edc-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.555592 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.578500 4833 scope.go:117] "RemoveContainer" containerID="e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.584703 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"]
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.585227 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-log"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585239 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-log"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.585269 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-log"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585275 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-log"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.585288 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c249ee-bf3f-4fdd-8f22-56ef6f18881b" containerName="nova-cell0-conductor-conductor"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585295 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c249ee-bf3f-4fdd-8f22-56ef6f18881b" containerName="nova-cell0-conductor-conductor"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.585310 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-api"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585316 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-api"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.585331 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-metadata"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585337 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-metadata"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585647 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c249ee-bf3f-4fdd-8f22-56ef6f18881b" containerName="nova-cell0-conductor-conductor"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585665 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-metadata"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585685 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" containerName="nova-metadata-log"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585699 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-log"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.585711 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" containerName="nova-api-api"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.588414 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.595880 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.595929 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.595928 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.624841 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-qqrx8"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.625592 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.626828 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.626849 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.640129 4833 scope.go:117] "RemoveContainer" containerID="8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.643827 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28\": container with ID starting with 8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28 not found: ID does not exist" containerID="8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.643920 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28"} err="failed to get container status \"8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28\": rpc error: code = NotFound desc = could not find container \"8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28\": container with ID starting with 8aae16e4081dc7b95230701a9443f695850056f967001774ef8fbe7a619cdd28 not found: ID does not exist"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.643950 4833 scope.go:117] "RemoveContainer" containerID="e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.646741 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53\": container with ID starting with e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53 not found: ID does not exist" containerID="e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.646772 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53"} err="failed to get container status \"e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53\": rpc error: code = NotFound desc = could not find container \"e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53\": container with ID starting with e308f63114bd164e298425d0bba795d1fad67e30b533635b1948b02f03e31c53 not found: ID does not exist"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.646790 4833 scope.go:117] "RemoveContainer" containerID="ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.659362 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.659493 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.659576 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.659669 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.659701 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxdd\" (UniqueName: \"kubernetes.io/projected/3f103fdd-b425-420e-99f5-e73aaa0b91cc-kube-api-access-mxxdd\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.659836 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.659895 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.659974 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.660021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.705352 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.705386 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.750166 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.751030 4833 scope.go:117] "RemoveContainer" containerID="a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.762404 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765400 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765452 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765480 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765500 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxdd\" (UniqueName: \"kubernetes.io/projected/3f103fdd-b425-420e-99f5-e73aaa0b91cc-kube-api-access-mxxdd\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765594 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765629 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765676 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.765730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.767246 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.768209 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.769092 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.774277 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.776311 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.798795 4833 scope.go:117] "RemoveContainer" containerID="ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.799120 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.799223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.799320 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.799340 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.799930 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.800035 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c\": container with ID starting with ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c not found: ID does not exist" containerID="ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.800065 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c"} err="failed to get container status \"ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c\": rpc error: code = NotFound desc = could not find container \"ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c\": container with ID starting with ac0219a87c1325770c4894722161b965a03354f38c4c6cf49adc9e88a879ac7c not found: ID does not exist"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.800089 4833 scope.go:117] "RemoveContainer" containerID="a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.800395 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a\": container with ID starting with a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a not found: ID does not exist" containerID="a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.800418 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a"} err="failed to get container status \"a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a\": rpc error: code = NotFound desc = could not find container \"a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a\": container with ID starting with a0e5c6b3ae0fd9cc383069142322c85a9c6ce9b3367d56d64ab94495047f3e8a not found: ID does not exist"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.800433 4833 scope.go:117] "RemoveContainer" containerID="51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.803590 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.805981 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.806360 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxdd\" (UniqueName: \"kubernetes.io/projected/3f103fdd-b425-420e-99f5-e73aaa0b91cc-kube-api-access-mxxdd\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.823653 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.836606 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.838500 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.841449 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.841623 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.841807 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.850336 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.867601 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4a338-75d6-4013-87be-5a57fb8f203e-logs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868120 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868196 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868282 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868337 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868359 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868401 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-config-data\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aad2dab2-c620-43aa-b43b-be9119ff2864-logs\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868633 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5wqf\" (UniqueName: \"kubernetes.io/projected/aad2dab2-c620-43aa-b43b-be9119ff2864-kube-api-access-c5wqf\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868695 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn8vj\" (UniqueName: \"kubernetes.io/projected/a2b4a338-75d6-4013-87be-5a57fb8f203e-kube-api-access-mn8vj\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.868834 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-config-data\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.877898 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.890211 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.890596 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.895592 4833 scope.go:117] "RemoveContainer" containerID="51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840"
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.895907 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840\": container with ID starting with 51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840 not found: ID does not exist" containerID="51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.895946 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840"} err="failed to get container status \"51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840\": rpc error: code = NotFound desc = could not find container \"51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840\": container with ID starting with 51fda6b52644a269bdc7ae0f18a7b037d0dca524844f50d5ea60e2488fd33840 not found: ID does not exist"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.904464 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.928828 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: E1013 09:02:20.929508 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f63dce5-488f-43e5-8217-5c855de31f30" containerName="nova-scheduler-scheduler"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.929559 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f63dce5-488f-43e5-8217-5c855de31f30" containerName="nova-scheduler-scheduler"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.929852 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f63dce5-488f-43e5-8217-5c855de31f30" containerName="nova-scheduler-scheduler"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.930815 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.933604 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.955306 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970047 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-config-data\") pod \"6f63dce5-488f-43e5-8217-5c855de31f30\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") "
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970207 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-combined-ca-bundle\") pod \"6f63dce5-488f-43e5-8217-5c855de31f30\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") "
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970330 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6klz\" (UniqueName: \"kubernetes.io/projected/6f63dce5-488f-43e5-8217-5c855de31f30-kube-api-access-m6klz\") pod \"6f63dce5-488f-43e5-8217-5c855de31f30\" (UID: \"6f63dce5-488f-43e5-8217-5c855de31f30\") "
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970586 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970614 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970643 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970664 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-config-data\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970907 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24phx\" (UniqueName: \"kubernetes.io/projected/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-kube-api-access-24phx\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970935 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aad2dab2-c620-43aa-b43b-be9119ff2864-logs\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970968 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5wqf\" (UniqueName: \"kubernetes.io/projected/aad2dab2-c620-43aa-b43b-be9119ff2864-kube-api-access-c5wqf\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.970991 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn8vj\" (UniqueName: \"kubernetes.io/projected/a2b4a338-75d6-4013-87be-5a57fb8f203e-kube-api-access-mn8vj\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.971008 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.971054 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-config-data\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.971104 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4a338-75d6-4013-87be-5a57fb8f203e-logs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.971126 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.971149 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.972329 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aad2dab2-c620-43aa-b43b-be9119ff2864-logs\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.974164 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4a338-75d6-4013-87be-5a57fb8f203e-logs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.977126 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f63dce5-488f-43e5-8217-5c855de31f30-kube-api-access-m6klz" (OuterVolumeSpecName: "kube-api-access-m6klz") pod "6f63dce5-488f-43e5-8217-5c855de31f30" (UID: "6f63dce5-488f-43e5-8217-5c855de31f30"). InnerVolumeSpecName "kube-api-access-m6klz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.977489 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.977517 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.977529 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.978677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.980356 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad2dab2-c620-43aa-b43b-be9119ff2864-config-data\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.983228 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.990820 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn8vj\" (UniqueName: \"kubernetes.io/projected/a2b4a338-75d6-4013-87be-5a57fb8f203e-kube-api-access-mn8vj\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.993527 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4a338-75d6-4013-87be-5a57fb8f203e-config-data\") pod \"nova-api-0\" (UID: \"a2b4a338-75d6-4013-87be-5a57fb8f203e\") " pod="openstack/nova-api-0"
Oct 13 09:02:20 crc kubenswrapper[4833]: I1013 09:02:20.996943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5wqf\" (UniqueName: \"kubernetes.io/projected/aad2dab2-c620-43aa-b43b-be9119ff2864-kube-api-access-c5wqf\") pod \"nova-metadata-0\" (UID: \"aad2dab2-c620-43aa-b43b-be9119ff2864\") " pod="openstack/nova-metadata-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.013147 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f63dce5-488f-43e5-8217-5c855de31f30" (UID: "6f63dce5-488f-43e5-8217-5c855de31f30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.025410 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-config-data" (OuterVolumeSpecName: "config-data") pod "6f63dce5-488f-43e5-8217-5c855de31f30" (UID: "6f63dce5-488f-43e5-8217-5c855de31f30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.053504 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.073789 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24phx\" (UniqueName: \"kubernetes.io/projected/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-kube-api-access-24phx\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.073874 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.073969 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.074023 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6klz\" (UniqueName: \"kubernetes.io/projected/6f63dce5-488f-43e5-8217-5c855de31f30-kube-api-access-m6klz\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.074034 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.074045 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f63dce5-488f-43e5-8217-5c855de31f30-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.079137 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.080146 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.096195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24phx\" (UniqueName: \"kubernetes.io/projected/d236a3ad-6af8-4c48-97b7-9bc1c9d90039-kube-api-access-24phx\") pod \"nova-cell0-conductor-0\" (UID: \"d236a3ad-6af8-4c48-97b7-9bc1c9d90039\") " pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.187728 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.203087 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.248156 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.565763 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.565852 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f63dce5-488f-43e5-8217-5c855de31f30","Type":"ContainerDied","Data":"0cd933f168e8ecaa2d3c078603a96b419e40c4f5dca6de96a79d50261b2006ba"}
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.566121 4833 scope.go:117] "RemoveContainer" containerID="545758d2f6f4b36d5c73ab20671c7d7211159aa78a02182bff7454e234246929"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.602954 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd"]
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.628985 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 09:02:21 crc kubenswrapper[4833]: W1013 09:02:21.629192 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f103fdd_b425_420e_99f5_e73aaa0b91cc.slice/crio-9191cb1eb05a7801417817ef1f22311af5919fe30fe9e5372ad232d7590460c5 WatchSource:0}: Error finding container 9191cb1eb05a7801417817ef1f22311af5919fe30fe9e5372ad232d7590460c5: Status 404 returned error can't find the container with id 9191cb1eb05a7801417817ef1f22311af5919fe30fe9e5372ad232d7590460c5
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.645079 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.659975 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.661990 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.664830 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.671937 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.732672 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 13 09:02:21 crc kubenswrapper[4833]: W1013 09:02:21.737096 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2b4a338_75d6_4013_87be_5a57fb8f203e.slice/crio-3135c03f8c16591dae70d4df18f1ed030cdd45cd8dfd3542fe7951ac7796eb1e WatchSource:0}: Error finding container 3135c03f8c16591dae70d4df18f1ed030cdd45cd8dfd3542fe7951ac7796eb1e: Status 404 returned error can't find the container with id 3135c03f8c16591dae70d4df18f1ed030cdd45cd8dfd3542fe7951ac7796eb1e
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.794942 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxl9\" (UniqueName: \"kubernetes.io/projected/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-kube-api-access-8fxl9\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.794998 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.795053 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-config-data\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.838874 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.849153 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 13 09:02:21 crc kubenswrapper[4833]: W1013 09:02:21.849659 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd236a3ad_6af8_4c48_97b7_9bc1c9d90039.slice/crio-75afe7c0b04d1125253fe3be84e043190c96ac4b5e8eabed220431003b3e49b7 WatchSource:0}: Error finding container 75afe7c0b04d1125253fe3be84e043190c96ac4b5e8eabed220431003b3e49b7: Status 404 returned error can't find the container with id 75afe7c0b04d1125253fe3be84e043190c96ac4b5e8eabed220431003b3e49b7
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.896952 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fxl9\" (UniqueName: \"kubernetes.io/projected/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-kube-api-access-8fxl9\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.897013 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.897070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-config-data\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.901969 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.903269 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-config-data\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:21 crc kubenswrapper[4833]: I1013 09:02:21.921896 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fxl9\" (UniqueName: \"kubernetes.io/projected/ff520e60-61f4-4a72-bff7-f0a47fc3c5f1-kube-api-access-8fxl9\") pod \"nova-scheduler-0\" (UID: \"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1\") " pod="openstack/nova-scheduler-0"
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.015589 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.512566 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.666366 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c249ee-bf3f-4fdd-8f22-56ef6f18881b" path="/var/lib/kubelet/pods/31c249ee-bf3f-4fdd-8f22-56ef6f18881b/volumes"
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.668101 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f63dce5-488f-43e5-8217-5c855de31f30" path="/var/lib/kubelet/pods/6f63dce5-488f-43e5-8217-5c855de31f30/volumes"
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.669574 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be48f910-53ab-4cbd-a846-543d163f0edc" path="/var/lib/kubelet/pods/be48f910-53ab-4cbd-a846-543d163f0edc/volumes"
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.669765 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd" podStartSLOduration=2.17881455 podStartE2EDuration="2.669744582s" podCreationTimestamp="2025-10-13 09:02:20 +0000 UTC" firstStartedPulling="2025-10-13 09:02:21.633872931 +0000 UTC m=+9231.734295857" lastFinishedPulling="2025-10-13 09:02:22.124802973 +0000 UTC m=+9232.225225889" observedRunningTime="2025-10-13 09:02:22.662399963 +0000 UTC m=+9232.762822879" watchObservedRunningTime="2025-10-13 09:02:22.669744582 +0000 UTC m=+9232.770167518"
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.670353 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7ccb3e-8928-43a2-abdb-6225bfabd4e5" path="/var/lib/kubelet/pods/ef7ccb3e-8928-43a2-abdb-6225bfabd4e5/volumes"
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.671191 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd" event={"ID":"3f103fdd-b425-420e-99f5-e73aaa0b91cc","Type":"ContainerStarted","Data":"ad3e546a7f77c6b7ef02fd53c8428a94eb2781b42ccd4e89d52f6534ab79fc37"}
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.671225 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd" event={"ID":"3f103fdd-b425-420e-99f5-e73aaa0b91cc","Type":"ContainerStarted","Data":"9191cb1eb05a7801417817ef1f22311af5919fe30fe9e5372ad232d7590460c5"}
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.672484 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4a338-75d6-4013-87be-5a57fb8f203e","Type":"ContainerStarted","Data":"7f6e30e6694a7b1ac6715e43dfdbd6e5ec3dd5c357a2a2e767acef15617ab568"}
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.672509 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4a338-75d6-4013-87be-5a57fb8f203e","Type":"ContainerStarted","Data":"2bc19cc2b3055bd139a01652383b53c2f7f24e8c5e7989069e0b2031b597ca4b"}
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.672521 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4a338-75d6-4013-87be-5a57fb8f203e","Type":"ContainerStarted","Data":"3135c03f8c16591dae70d4df18f1ed030cdd45cd8dfd3542fe7951ac7796eb1e"}
Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.676059 4833 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1","Type":"ContainerStarted","Data":"f0d558fa1f0fd15ff4fd55f64cd5ca44744fe09529a3c9ef4606055020c510bd"} Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.706590 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d236a3ad-6af8-4c48-97b7-9bc1c9d90039","Type":"ContainerStarted","Data":"8c23166dc418da6d45896da73f350875b9539a6322d0bf0f7a8cfb9b9f1a6e6f"} Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.706677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d236a3ad-6af8-4c48-97b7-9bc1c9d90039","Type":"ContainerStarted","Data":"75afe7c0b04d1125253fe3be84e043190c96ac4b5e8eabed220431003b3e49b7"} Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.706974 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.727396 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad2dab2-c620-43aa-b43b-be9119ff2864","Type":"ContainerStarted","Data":"37242995847a8fdffceceb354aa3102c92e40a7483cf5addab0d89daf8ded49e"} Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.727464 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad2dab2-c620-43aa-b43b-be9119ff2864","Type":"ContainerStarted","Data":"d08ee835795c63a5527e8f349a9a9fc3bcd141f8190708cf0de265a6e1d54207"} Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.727477 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad2dab2-c620-43aa-b43b-be9119ff2864","Type":"ContainerStarted","Data":"66b8d7da8ec574159537000c35a94cfdd9932ab8a9055a938229944493fbc435"} Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.730243 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7302292660000003 podStartE2EDuration="2.730229266s" podCreationTimestamp="2025-10-13 09:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 09:02:22.723217696 +0000 UTC m=+9232.823640612" watchObservedRunningTime="2025-10-13 09:02:22.730229266 +0000 UTC m=+9232.830652182" Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.755197 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.755175237 podStartE2EDuration="2.755175237s" podCreationTimestamp="2025-10-13 09:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 09:02:22.741332052 +0000 UTC m=+9232.841754978" watchObservedRunningTime="2025-10-13 09:02:22.755175237 +0000 UTC m=+9232.855598153" Oct 13 09:02:22 crc kubenswrapper[4833]: I1013 09:02:22.774731 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.774698003 podStartE2EDuration="2.774698003s" podCreationTimestamp="2025-10-13 09:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 09:02:22.763123873 +0000 UTC m=+9232.863546779" watchObservedRunningTime="2025-10-13 
09:02:22.774698003 +0000 UTC m=+9232.875120919" Oct 13 09:02:23 crc kubenswrapper[4833]: I1013 09:02:23.738756 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ff520e60-61f4-4a72-bff7-f0a47fc3c5f1","Type":"ContainerStarted","Data":"8536bb333f3cfd138ad616522068f37f650e53b7d2a3e3fb166cfbc131be4c32"} Oct 13 09:02:26 crc kubenswrapper[4833]: I1013 09:02:26.188772 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 09:02:26 crc kubenswrapper[4833]: I1013 09:02:26.190679 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 09:02:27 crc kubenswrapper[4833]: I1013 09:02:27.016047 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 13 09:02:28 crc kubenswrapper[4833]: I1013 09:02:28.211373 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 13 09:02:28 crc kubenswrapper[4833]: I1013 09:02:28.231156 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=7.231139278 podStartE2EDuration="7.231139278s" podCreationTimestamp="2025-10-13 09:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 09:02:23.767095706 +0000 UTC m=+9233.867518622" watchObservedRunningTime="2025-10-13 09:02:28.231139278 +0000 UTC m=+9238.331562184" Oct 13 09:02:28 crc kubenswrapper[4833]: I1013 09:02:28.627566 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:02:28 crc kubenswrapper[4833]: E1013 09:02:28.627942 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:02:31 crc kubenswrapper[4833]: I1013 09:02:31.195128 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 09:02:31 crc kubenswrapper[4833]: I1013 09:02:31.199016 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 09:02:31 crc kubenswrapper[4833]: I1013 09:02:31.205676 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 09:02:31 crc kubenswrapper[4833]: I1013 09:02:31.206372 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 09:02:31 crc kubenswrapper[4833]: I1013 09:02:31.283894 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 13 09:02:32 crc kubenswrapper[4833]: I1013 09:02:32.015979 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 13 09:02:32 crc kubenswrapper[4833]: I1013 09:02:32.049172 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 13 09:02:32 crc kubenswrapper[4833]: I1013 09:02:32.207831 4833 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="aad2dab2-c620-43aa-b43b-be9119ff2864" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 09:02:32 crc kubenswrapper[4833]: I1013 09:02:32.207906 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aad2dab2-c620-43aa-b43b-be9119ff2864" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 09:02:32 crc kubenswrapper[4833]: I1013 09:02:32.219695 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2b4a338-75d6-4013-87be-5a57fb8f203e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 09:02:32 crc kubenswrapper[4833]: I1013 09:02:32.219764 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2b4a338-75d6-4013-87be-5a57fb8f203e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 09:02:32 crc kubenswrapper[4833]: I1013 09:02:32.905040 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.193389 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.194440 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.209666 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.216418 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.216903 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.217784 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.221919 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.224150 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.972015 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 09:02:41 crc kubenswrapper[4833]: I1013 09:02:41.981298 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 09:02:42 crc kubenswrapper[4833]: I1013 09:02:42.627421 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:02:42 crc kubenswrapper[4833]: E1013 09:02:42.627838 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:02:53 crc kubenswrapper[4833]: I1013 09:02:53.628337 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:02:53 crc kubenswrapper[4833]: E1013 09:02:53.629453 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:03:05 crc kubenswrapper[4833]: I1013 09:03:05.628516 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:03:05 crc kubenswrapper[4833]: E1013 09:03:05.629366 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:03:17 crc kubenswrapper[4833]: I1013 09:03:17.626970 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:03:17 crc kubenswrapper[4833]: E1013 09:03:17.627921 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:03:32 crc kubenswrapper[4833]: I1013 09:03:32.627719 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:03:32 crc kubenswrapper[4833]: E1013 09:03:32.629010 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:03:45 crc kubenswrapper[4833]: I1013 09:03:45.885701 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q6dpw"] Oct 13 09:03:45 crc kubenswrapper[4833]: I1013 09:03:45.889332 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:45 crc kubenswrapper[4833]: I1013 09:03:45.903008 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6dpw"] Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.024765 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqdj\" (UniqueName: \"kubernetes.io/projected/f1cd098d-0004-4639-8724-9fc557c36298-kube-api-access-mgqdj\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.025005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-catalog-content\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.025071 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-utilities\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.128718 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqdj\" (UniqueName: \"kubernetes.io/projected/f1cd098d-0004-4639-8724-9fc557c36298-kube-api-access-mgqdj\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.128956 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-catalog-content\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.129029 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-utilities\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.129766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-utilities\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.130503 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-catalog-content\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.173290 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mgqdj\" (UniqueName: \"kubernetes.io/projected/f1cd098d-0004-4639-8724-9fc557c36298-kube-api-access-mgqdj\") pod \"certified-operators-q6dpw\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.227704 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:46 crc kubenswrapper[4833]: I1013 09:03:46.864502 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6dpw"] Oct 13 09:03:47 crc kubenswrapper[4833]: I1013 09:03:47.627847 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:03:47 crc kubenswrapper[4833]: E1013 09:03:47.628706 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:03:47 crc kubenswrapper[4833]: I1013 09:03:47.685996 4833 generic.go:334] "Generic (PLEG): container finished" podID="f1cd098d-0004-4639-8724-9fc557c36298" containerID="9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969" exitCode=0 Oct 13 09:03:47 crc kubenswrapper[4833]: I1013 09:03:47.686049 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dpw" event={"ID":"f1cd098d-0004-4639-8724-9fc557c36298","Type":"ContainerDied","Data":"9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969"} Oct 13 09:03:47 crc kubenswrapper[4833]: I1013 09:03:47.686080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dpw" event={"ID":"f1cd098d-0004-4639-8724-9fc557c36298","Type":"ContainerStarted","Data":"93bff8ff917e3d3d2bd8de3d15ca983bf8713f5fbb95d509adb92e606068bea9"} Oct 13 09:03:47 crc kubenswrapper[4833]: I1013 09:03:47.688369 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 09:03:49 crc kubenswrapper[4833]: I1013 09:03:49.708833 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dpw" event={"ID":"f1cd098d-0004-4639-8724-9fc557c36298","Type":"ContainerStarted","Data":"f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4"} Oct 13 09:03:52 crc kubenswrapper[4833]: I1013 09:03:52.748798 4833 generic.go:334] "Generic (PLEG): container finished" podID="f1cd098d-0004-4639-8724-9fc557c36298" containerID="f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4" exitCode=0 Oct 13 09:03:52 crc kubenswrapper[4833]: I1013 09:03:52.748881 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dpw" event={"ID":"f1cd098d-0004-4639-8724-9fc557c36298","Type":"ContainerDied","Data":"f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4"} Oct 13 09:03:55 crc kubenswrapper[4833]: I1013 09:03:55.778569 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dpw" 
event={"ID":"f1cd098d-0004-4639-8724-9fc557c36298","Type":"ContainerStarted","Data":"245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9"} Oct 13 09:03:55 crc kubenswrapper[4833]: I1013 09:03:55.796157 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q6dpw" podStartSLOduration=4.245485207 podStartE2EDuration="10.796141436s" podCreationTimestamp="2025-10-13 09:03:45 +0000 UTC" firstStartedPulling="2025-10-13 09:03:47.688091033 +0000 UTC m=+9317.788513949" lastFinishedPulling="2025-10-13 09:03:54.238747232 +0000 UTC m=+9324.339170178" observedRunningTime="2025-10-13 09:03:55.794358625 +0000 UTC m=+9325.894781541" watchObservedRunningTime="2025-10-13 09:03:55.796141436 +0000 UTC m=+9325.896564352" Oct 13 09:03:56 crc kubenswrapper[4833]: I1013 09:03:56.229551 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:56 crc kubenswrapper[4833]: I1013 09:03:56.229631 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:03:57 crc kubenswrapper[4833]: I1013 09:03:57.295741 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q6dpw" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="registry-server" probeResult="failure" output=< Oct 13 09:03:57 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Oct 13 09:03:57 crc kubenswrapper[4833]: > Oct 13 09:04:00 crc kubenswrapper[4833]: I1013 09:04:00.641854 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:04:00 crc kubenswrapper[4833]: E1013 09:04:00.642638 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:04:06 crc kubenswrapper[4833]: I1013 09:04:06.326865 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:04:06 crc kubenswrapper[4833]: I1013 09:04:06.393359 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:04:06 crc kubenswrapper[4833]: I1013 09:04:06.587365 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q6dpw"] Oct 13 09:04:07 crc kubenswrapper[4833]: I1013 09:04:07.908106 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q6dpw" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="registry-server" containerID="cri-o://245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9" gracePeriod=2 Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.438848 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.467035 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgqdj\" (UniqueName: \"kubernetes.io/projected/f1cd098d-0004-4639-8724-9fc557c36298-kube-api-access-mgqdj\") pod \"f1cd098d-0004-4639-8724-9fc557c36298\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.467160 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-catalog-content\") pod \"f1cd098d-0004-4639-8724-9fc557c36298\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.467241 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-utilities\") pod \"f1cd098d-0004-4639-8724-9fc557c36298\" (UID: \"f1cd098d-0004-4639-8724-9fc557c36298\") " Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.468216 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-utilities" (OuterVolumeSpecName: "utilities") pod "f1cd098d-0004-4639-8724-9fc557c36298" (UID: "f1cd098d-0004-4639-8724-9fc557c36298"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.473489 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1cd098d-0004-4639-8724-9fc557c36298-kube-api-access-mgqdj" (OuterVolumeSpecName: "kube-api-access-mgqdj") pod "f1cd098d-0004-4639-8724-9fc557c36298" (UID: "f1cd098d-0004-4639-8724-9fc557c36298"). InnerVolumeSpecName "kube-api-access-mgqdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.513818 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1cd098d-0004-4639-8724-9fc557c36298" (UID: "f1cd098d-0004-4639-8724-9fc557c36298"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.568995 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.569041 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cd098d-0004-4639-8724-9fc557c36298-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.569055 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgqdj\" (UniqueName: \"kubernetes.io/projected/f1cd098d-0004-4639-8724-9fc557c36298-kube-api-access-mgqdj\") on node \"crc\" DevicePath \"\"" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.922774 4833 generic.go:334] "Generic (PLEG): container finished" podID="f1cd098d-0004-4639-8724-9fc557c36298" containerID="245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9" exitCode=0 Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.922836 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dpw" event={"ID":"f1cd098d-0004-4639-8724-9fc557c36298","Type":"ContainerDied","Data":"245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9"} Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.922898 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dpw" event={"ID":"f1cd098d-0004-4639-8724-9fc557c36298","Type":"ContainerDied","Data":"93bff8ff917e3d3d2bd8de3d15ca983bf8713f5fbb95d509adb92e606068bea9"} Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.922902 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q6dpw" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.922922 4833 scope.go:117] "RemoveContainer" containerID="245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.955724 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q6dpw"] Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.957412 4833 scope.go:117] "RemoveContainer" containerID="f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4" Oct 13 09:04:08 crc kubenswrapper[4833]: I1013 09:04:08.965173 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q6dpw"] Oct 13 09:04:09 crc kubenswrapper[4833]: I1013 09:04:09.700443 4833 scope.go:117] "RemoveContainer" containerID="9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969" Oct 13 09:04:09 crc kubenswrapper[4833]: I1013 09:04:09.893910 4833 scope.go:117] "RemoveContainer" containerID="245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9" Oct 13 09:04:09 crc kubenswrapper[4833]: E1013 09:04:09.894866 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9\": container with ID starting with 245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9 not found: ID does not exist" containerID="245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9" Oct 13 09:04:09 crc kubenswrapper[4833]: I1013 09:04:09.894908 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9"} err="failed to get container status \"245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9\": rpc error: code = NotFound desc = could not find container \"245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9\": container with ID starting with 245f6da9120471ba45b18c93a253b1b7b0bfe086eab54eb8a06fe6d1340695f9 not found: ID does not exist" Oct 13 09:04:09 crc kubenswrapper[4833]: I1013 09:04:09.894940 4833 scope.go:117] "RemoveContainer" containerID="f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4" Oct 13 09:04:09 crc kubenswrapper[4833]: E1013 09:04:09.895349 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4\": container with ID starting with f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4 not found: ID does not exist" containerID="f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4" Oct 13 09:04:09 crc kubenswrapper[4833]: I1013 09:04:09.895417 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4"} err="failed to get container status \"f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4\": rpc error: code = NotFound desc = could not find container \"f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4\": container with ID starting with f0000e80f53bdadbcb976caf7a3289a5fed02108725906edfd1cd223fae3a8a4 not found: ID does not exist" Oct 13 09:04:09 crc kubenswrapper[4833]: I1013 09:04:09.895459 4833 scope.go:117] "RemoveContainer" 
containerID="9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969" Oct 13 09:04:09 crc kubenswrapper[4833]: E1013 09:04:09.895920 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969\": container with ID starting with 9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969 not found: ID does not exist" containerID="9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969" Oct 13 09:04:09 crc kubenswrapper[4833]: I1013 09:04:09.895981 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969"} err="failed to get container status \"9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969\": rpc error: code = NotFound desc = could not find container \"9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969\": container with ID starting with 9912ad263909c792b6c11758ba1d24996a2a80bd16ad2765e2b71c576f90d969 not found: ID does not exist" Oct 13 09:04:10 crc kubenswrapper[4833]: I1013 09:04:10.644258 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1cd098d-0004-4639-8724-9fc557c36298" path="/var/lib/kubelet/pods/f1cd098d-0004-4639-8724-9fc557c36298/volumes" Oct 13 09:04:13 crc kubenswrapper[4833]: I1013 09:04:13.627217 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:04:13 crc kubenswrapper[4833]: E1013 09:04:13.627574 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:04:25 crc kubenswrapper[4833]: I1013 09:04:25.629986 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:04:25 crc kubenswrapper[4833]: E1013 09:04:25.632832 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:04:40 crc kubenswrapper[4833]: I1013 09:04:40.634415 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:04:40 crc kubenswrapper[4833]: E1013 09:04:40.635232 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:04:52 crc kubenswrapper[4833]: I1013 09:04:52.627561 4833 scope.go:117] "RemoveContainer" 
containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:04:52 crc kubenswrapper[4833]: E1013 09:04:52.628311 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:05:07 crc kubenswrapper[4833]: I1013 09:05:07.628064 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:05:08 crc kubenswrapper[4833]: I1013 09:05:08.539324 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"c1fc0c40f2a1959157f45de53e1df0156fcbee785a2270243197e2ebd2f4a145"} Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.700091 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hwpds"] Oct 13 09:06:12 crc kubenswrapper[4833]: E1013 09:06:12.701412 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="registry-server" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.701435 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="registry-server" Oct 13 09:06:12 crc kubenswrapper[4833]: E1013 09:06:12.701477 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="extract-content" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.701489 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="extract-content" Oct 13 09:06:12 crc kubenswrapper[4833]: E1013 09:06:12.701560 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="extract-utilities" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.701575 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="extract-utilities" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.702024 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1cd098d-0004-4639-8724-9fc557c36298" containerName="registry-server" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.704790 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.718931 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwpds"] Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.720727 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-utilities\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.720809 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-catalog-content\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.720844 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2fk7\" (UniqueName: \"kubernetes.io/projected/d94038ca-b1f6-46bc-bec7-993684cbf071-kube-api-access-z2fk7\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.822780 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-utilities\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.822866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-catalog-content\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.822903 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2fk7\" (UniqueName: \"kubernetes.io/projected/d94038ca-b1f6-46bc-bec7-993684cbf071-kube-api-access-z2fk7\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.824195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-catalog-content\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.824421 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-utilities\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:12 crc kubenswrapper[4833]: I1013 09:06:12.842596 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z2fk7\" (UniqueName: \"kubernetes.io/projected/d94038ca-b1f6-46bc-bec7-993684cbf071-kube-api-access-z2fk7\") pod \"redhat-marketplace-hwpds\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:13 crc kubenswrapper[4833]: I1013 09:06:13.068339 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:13 crc kubenswrapper[4833]: I1013 09:06:13.590626 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwpds"] Oct 13 09:06:14 crc kubenswrapper[4833]: I1013 09:06:14.174870 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwpds" event={"ID":"d94038ca-b1f6-46bc-bec7-993684cbf071","Type":"ContainerStarted","Data":"a9f894c77230faa4fbfea3519196dd8c96bfcfa9860727369074a9e52f06d33e"} Oct 13 09:06:15 crc kubenswrapper[4833]: I1013 09:06:15.186705 4833 generic.go:334] "Generic (PLEG): container finished" podID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerID="24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641" exitCode=0 Oct 13 09:06:15 crc kubenswrapper[4833]: I1013 09:06:15.186824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwpds" event={"ID":"d94038ca-b1f6-46bc-bec7-993684cbf071","Type":"ContainerDied","Data":"24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641"} Oct 13 09:06:17 crc kubenswrapper[4833]: I1013 09:06:17.209189 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwpds" event={"ID":"d94038ca-b1f6-46bc-bec7-993684cbf071","Type":"ContainerStarted","Data":"5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296"} Oct 13 09:06:18 crc kubenswrapper[4833]: I1013 09:06:18.225466 4833 generic.go:334] "Generic (PLEG): container finished" podID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerID="5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296" exitCode=0 Oct 13 09:06:18 crc kubenswrapper[4833]: I1013 09:06:18.225611 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwpds" event={"ID":"d94038ca-b1f6-46bc-bec7-993684cbf071","Type":"ContainerDied","Data":"5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296"} Oct 13 09:06:20 crc kubenswrapper[4833]: I1013 09:06:20.251377 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwpds" event={"ID":"d94038ca-b1f6-46bc-bec7-993684cbf071","Type":"ContainerStarted","Data":"239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2"} Oct 13 09:06:20 crc kubenswrapper[4833]: I1013 09:06:20.275217 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hwpds" podStartSLOduration=4.209067378 podStartE2EDuration="8.275197028s" podCreationTimestamp="2025-10-13 09:06:12 +0000 UTC" firstStartedPulling="2025-10-13 09:06:15.191004984 +0000 UTC m=+9465.291427900" lastFinishedPulling="2025-10-13 09:06:19.257134614 +0000 UTC m=+9469.357557550" observedRunningTime="2025-10-13 09:06:20.27141628 +0000 UTC m=+9470.371839206" watchObservedRunningTime="2025-10-13 09:06:20.275197028 +0000 UTC m=+9470.375619954" Oct 13 09:06:23 crc kubenswrapper[4833]: I1013 09:06:23.068773 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:23 crc kubenswrapper[4833]: I1013 09:06:23.069148 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:23 crc kubenswrapper[4833]: I1013 09:06:23.138397 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.132191 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.212062 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwpds"] Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.389649 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hwpds" podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerName="registry-server" containerID="cri-o://239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2" gracePeriod=2 Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.891448 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.976864 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-utilities\") pod \"d94038ca-b1f6-46bc-bec7-993684cbf071\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.977082 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-catalog-content\") pod \"d94038ca-b1f6-46bc-bec7-993684cbf071\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.977122 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2fk7\" (UniqueName: \"kubernetes.io/projected/d94038ca-b1f6-46bc-bec7-993684cbf071-kube-api-access-z2fk7\") pod \"d94038ca-b1f6-46bc-bec7-993684cbf071\" (UID: \"d94038ca-b1f6-46bc-bec7-993684cbf071\") " Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.978381 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-utilities" (OuterVolumeSpecName: "utilities") pod "d94038ca-b1f6-46bc-bec7-993684cbf071" (UID: "d94038ca-b1f6-46bc-bec7-993684cbf071"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.982960 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d94038ca-b1f6-46bc-bec7-993684cbf071-kube-api-access-z2fk7" (OuterVolumeSpecName: "kube-api-access-z2fk7") pod "d94038ca-b1f6-46bc-bec7-993684cbf071" (UID: "d94038ca-b1f6-46bc-bec7-993684cbf071"). InnerVolumeSpecName "kube-api-access-z2fk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:06:33 crc kubenswrapper[4833]: I1013 09:06:33.991459 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d94038ca-b1f6-46bc-bec7-993684cbf071" (UID: "d94038ca-b1f6-46bc-bec7-993684cbf071"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.079129 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.079328 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2fk7\" (UniqueName: \"kubernetes.io/projected/d94038ca-b1f6-46bc-bec7-993684cbf071-kube-api-access-z2fk7\") on node \"crc\" DevicePath \"\"" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.079389 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94038ca-b1f6-46bc-bec7-993684cbf071-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.403524 4833 generic.go:334] "Generic (PLEG): container finished" podID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerID="239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2" exitCode=0 Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.403629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwpds" event={"ID":"d94038ca-b1f6-46bc-bec7-993684cbf071","Type":"ContainerDied","Data":"239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2"} Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.403650 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwpds" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.403865 4833 scope.go:117] "RemoveContainer" containerID="239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.403849 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwpds" event={"ID":"d94038ca-b1f6-46bc-bec7-993684cbf071","Type":"ContainerDied","Data":"a9f894c77230faa4fbfea3519196dd8c96bfcfa9860727369074a9e52f06d33e"} Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.432715 4833 scope.go:117] "RemoveContainer" containerID="5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.453821 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwpds"] Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.463354 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwpds"] Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.470509 4833 scope.go:117] "RemoveContainer" containerID="24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.519478 4833 scope.go:117] "RemoveContainer" containerID="239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2" Oct 13 09:06:34 crc kubenswrapper[4833]: E1013 09:06:34.519966 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2\": container with ID starting with 239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2 not found: ID does not exist" containerID="239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.520024 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2"} err="failed to get container status \"239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2\": rpc error: code = NotFound desc = could not find container \"239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2\": container with ID starting with 239dddc88ec46556af8df6064e6e57d646235101feb2189600554011765599e2 not found: ID does not exist" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.520052 4833 scope.go:117] "RemoveContainer" containerID="5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296" Oct 13 09:06:34 crc kubenswrapper[4833]: E1013 09:06:34.520516 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296\": container with ID starting with 5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296 not found: ID does not exist" containerID="5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.520588 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296"} err="failed to get container status \"5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296\": rpc error: code = NotFound desc = could not find 
container \"5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296\": container with ID starting with 5c55a59c71e896ee07ff94a8f0882c33851ff9b3ed6e17ef3032300b9ddcf296 not found: ID does not exist" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.520615 4833 scope.go:117] "RemoveContainer" containerID="24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641" Oct 13 09:06:34 crc kubenswrapper[4833]: E1013 09:06:34.521135 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641\": container with ID starting with 24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641 not found: ID does not exist" containerID="24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.521169 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641"} err="failed to get container status \"24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641\": rpc error: code = NotFound desc = could not find container \"24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641\": container with ID starting with 24d2f09c854748772da09fe1e333644c7a76ab89bd90454f60ab7a40dffc0641 not found: ID does not exist" Oct 13 09:06:34 crc kubenswrapper[4833]: I1013 09:06:34.641772 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" path="/var/lib/kubelet/pods/d94038ca-b1f6-46bc-bec7-993684cbf071/volumes" Oct 13 09:07:30 crc kubenswrapper[4833]: I1013 09:07:30.542913 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:07:30 crc kubenswrapper[4833]: I1013 09:07:30.543499 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.611833 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-47k4s"] Oct 13 09:07:59 crc kubenswrapper[4833]: E1013 09:07:59.612936 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerName="extract-utilities" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.612955 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerName="extract-utilities" Oct 13 09:07:59 crc kubenswrapper[4833]: E1013 09:07:59.612972 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerName="registry-server" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.612981 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerName="registry-server" Oct 13 09:07:59 crc kubenswrapper[4833]: E1013 09:07:59.613024 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerName="extract-content" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.613033 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerName="extract-content" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.613330 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d94038ca-b1f6-46bc-bec7-993684cbf071" containerName="registry-server" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.615933 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.622306 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47k4s"] Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.663159 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6sp7\" (UniqueName: \"kubernetes.io/projected/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-kube-api-access-s6sp7\") pod \"redhat-operators-47k4s\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.663301 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-catalog-content\") pod \"redhat-operators-47k4s\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.663506 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-utilities\") pod \"redhat-operators-47k4s\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.765520 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-utilities\") pod \"redhat-operators-47k4s\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.765702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6sp7\" (UniqueName: \"kubernetes.io/projected/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-kube-api-access-s6sp7\") pod \"redhat-operators-47k4s\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.765786 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-catalog-content\") pod \"redhat-operators-47k4s\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.766205 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-catalog-content\") pod \"redhat-operators-47k4s\" (UID: 
\"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.766246 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-utilities\") pod \"redhat-operators-47k4s\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.785691 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6sp7\" (UniqueName: \"kubernetes.io/projected/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-kube-api-access-s6sp7\") pod \"redhat-operators-47k4s\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:07:59 crc kubenswrapper[4833]: I1013 09:07:59.945955 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:08:00 crc kubenswrapper[4833]: I1013 09:08:00.481394 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47k4s"] Oct 13 09:08:00 crc kubenswrapper[4833]: I1013 09:08:00.542740 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:08:00 crc kubenswrapper[4833]: I1013 09:08:00.542800 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:08:01 crc kubenswrapper[4833]: I1013 09:08:01.340496 4833 generic.go:334] "Generic (PLEG): container finished" podID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerID="a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627" exitCode=0 Oct 13 09:08:01 crc kubenswrapper[4833]: I1013 09:08:01.340692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47k4s" event={"ID":"b4f9836c-8f17-4927-a9e4-c43ac44a64fb","Type":"ContainerDied","Data":"a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627"} Oct 13 09:08:01 crc kubenswrapper[4833]: I1013 09:08:01.340874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47k4s" event={"ID":"b4f9836c-8f17-4927-a9e4-c43ac44a64fb","Type":"ContainerStarted","Data":"b51e246fa93624225ab707ed890e7b2f67243f3b2d99c6d7d159e50fe9d72f88"} Oct 13 09:08:02 crc kubenswrapper[4833]: I1013 09:08:02.352931 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47k4s" event={"ID":"b4f9836c-8f17-4927-a9e4-c43ac44a64fb","Type":"ContainerStarted","Data":"7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7"} Oct 13 09:08:09 crc kubenswrapper[4833]: I1013 09:08:09.432897 4833 generic.go:334] "Generic (PLEG): container finished" podID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerID="7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7" exitCode=0 Oct 13 09:08:09 crc kubenswrapper[4833]: I1013 09:08:09.432979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-47k4s" event={"ID":"b4f9836c-8f17-4927-a9e4-c43ac44a64fb","Type":"ContainerDied","Data":"7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7"} Oct 13 09:08:10 crc kubenswrapper[4833]: I1013 09:08:10.450447 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47k4s" event={"ID":"b4f9836c-8f17-4927-a9e4-c43ac44a64fb","Type":"ContainerStarted","Data":"2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75"} Oct 13 09:08:10 crc kubenswrapper[4833]: I1013 09:08:10.510651 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-47k4s" podStartSLOduration=2.886221168 podStartE2EDuration="11.510626522s" podCreationTimestamp="2025-10-13 09:07:59 +0000 UTC" firstStartedPulling="2025-10-13 09:08:01.343709328 +0000 UTC m=+9571.444132254" lastFinishedPulling="2025-10-13 09:08:09.968114702 +0000 UTC m=+9580.068537608" observedRunningTime="2025-10-13 09:08:10.47295587 +0000 UTC m=+9580.573378816" watchObservedRunningTime="2025-10-13 09:08:10.510626522 +0000 UTC m=+9580.611049448" Oct 13 09:08:19 crc kubenswrapper[4833]: I1013 09:08:19.947576 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:08:19 crc kubenswrapper[4833]: I1013 09:08:19.948121 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:08:21 crc kubenswrapper[4833]: I1013 09:08:21.011289 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-47k4s" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="registry-server" probeResult="failure" output=< Oct 13 09:08:21 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Oct 13 09:08:21 crc kubenswrapper[4833]: > Oct 13 09:08:30 crc kubenswrapper[4833]: I1013 09:08:30.000881 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:08:30 crc kubenswrapper[4833]: I1013 09:08:30.057296 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:08:30 crc kubenswrapper[4833]: I1013 09:08:30.543089 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:08:30 crc kubenswrapper[4833]: I1013 09:08:30.543144 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:08:30 crc kubenswrapper[4833]: I1013 09:08:30.543200 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 09:08:30 crc kubenswrapper[4833]: I1013 09:08:30.544005 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1fc0c40f2a1959157f45de53e1df0156fcbee785a2270243197e2ebd2f4a145"} 
pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 09:08:30 crc kubenswrapper[4833]: I1013 09:08:30.544060 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://c1fc0c40f2a1959157f45de53e1df0156fcbee785a2270243197e2ebd2f4a145" gracePeriod=600 Oct 13 09:08:30 crc kubenswrapper[4833]: I1013 09:08:30.854135 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-47k4s"] Oct 13 09:08:31 crc kubenswrapper[4833]: I1013 09:08:31.671987 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="c1fc0c40f2a1959157f45de53e1df0156fcbee785a2270243197e2ebd2f4a145" exitCode=0 Oct 13 09:08:31 crc kubenswrapper[4833]: I1013 09:08:31.672018 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"c1fc0c40f2a1959157f45de53e1df0156fcbee785a2270243197e2ebd2f4a145"} Oct 13 09:08:31 crc kubenswrapper[4833]: I1013 09:08:31.672070 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138"} Oct 13 09:08:31 crc kubenswrapper[4833]: I1013 09:08:31.672088 4833 scope.go:117] "RemoveContainer" containerID="012ebe3496e05271beb0f2417b1ba0dea73760fd7eab989d4ce27a4a433c6446" Oct 13 09:08:31 crc kubenswrapper[4833]: I1013 09:08:31.672250 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-47k4s" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="registry-server" containerID="cri-o://2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75" gracePeriod=2 Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.168228 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.305100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-catalog-content\") pod \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.305389 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6sp7\" (UniqueName: \"kubernetes.io/projected/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-kube-api-access-s6sp7\") pod \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.305636 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-utilities\") pod \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\" (UID: \"b4f9836c-8f17-4927-a9e4-c43ac44a64fb\") " Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.306298 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-utilities" (OuterVolumeSpecName: "utilities") pod "b4f9836c-8f17-4927-a9e4-c43ac44a64fb" (UID: "b4f9836c-8f17-4927-a9e4-c43ac44a64fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.310811 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-kube-api-access-s6sp7" (OuterVolumeSpecName: "kube-api-access-s6sp7") pod "b4f9836c-8f17-4927-a9e4-c43ac44a64fb" (UID: "b4f9836c-8f17-4927-a9e4-c43ac44a64fb"). InnerVolumeSpecName "kube-api-access-s6sp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.385617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4f9836c-8f17-4927-a9e4-c43ac44a64fb" (UID: "b4f9836c-8f17-4927-a9e4-c43ac44a64fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.408470 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.408502 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6sp7\" (UniqueName: \"kubernetes.io/projected/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-kube-api-access-s6sp7\") on node \"crc\" DevicePath \"\"" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.408514 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f9836c-8f17-4927-a9e4-c43ac44a64fb-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.682121 4833 generic.go:334] "Generic (PLEG): container finished" podID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerID="2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75" exitCode=0 Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.682181 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47k4s" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.682217 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47k4s" event={"ID":"b4f9836c-8f17-4927-a9e4-c43ac44a64fb","Type":"ContainerDied","Data":"2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75"} Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.682254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47k4s" event={"ID":"b4f9836c-8f17-4927-a9e4-c43ac44a64fb","Type":"ContainerDied","Data":"b51e246fa93624225ab707ed890e7b2f67243f3b2d99c6d7d159e50fe9d72f88"} Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.682276 4833 scope.go:117] "RemoveContainer" containerID="2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.706590 4833 scope.go:117] "RemoveContainer" containerID="7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.708518 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-47k4s"] Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.735150 4833 scope.go:117] "RemoveContainer" containerID="a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.765524 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-47k4s"] Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.795262 4833 scope.go:117] "RemoveContainer" containerID="2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75" Oct 13 09:08:32 crc kubenswrapper[4833]: E1013 09:08:32.795867 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75\": container with ID starting with 2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75 not found: ID does not exist" containerID="2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.795926 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75"} err="failed to get container status \"2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75\": rpc error: code = NotFound desc = could not find container \"2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75\": container with ID starting with 2d4eb187c97ecfd1383be405f1ce4c8f30c91563e77d1517cd599913fb756d75 not found: ID does not exist" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.795981 4833 scope.go:117] "RemoveContainer" containerID="7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7" Oct 13 09:08:32 crc kubenswrapper[4833]: E1013 09:08:32.796380 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7\": container with ID starting with 7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7 not found: ID does not exist" containerID="7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.796409 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7"} err="failed to get container status \"7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7\": rpc error: code = NotFound desc = could not find container \"7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7\": container with ID starting with 7e60c4362176e2803aed53219d477804c072c46ec50789af85c5ded3533fc3d7 not found: ID does not exist" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.796445 4833 scope.go:117] "RemoveContainer" containerID="a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627" Oct 13 09:08:32 crc kubenswrapper[4833]: E1013 09:08:32.796882 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627\": container with ID starting with a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627 not found: ID does not exist" containerID="a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627" Oct 13 09:08:32 crc kubenswrapper[4833]: I1013 09:08:32.796917 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627"} err="failed to get container status \"a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627\": rpc error: code = NotFound desc = could not find container \"a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627\": container with ID starting with a84ac4fac5cd408256ce12d4c011ae58a9a0c798288f571943782a0698ad4627 not found: ID does not exist" Oct 13 09:08:34 crc kubenswrapper[4833]: I1013 09:08:34.640954 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" path="/var/lib/kubelet/pods/b4f9836c-8f17-4927-a9e4-c43ac44a64fb/volumes" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.018195 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vvz22"] Oct 13 09:08:39 crc kubenswrapper[4833]: E1013 09:08:39.019050 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="extract-content" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.019062 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="extract-content" Oct 13 09:08:39 crc kubenswrapper[4833]: E1013 09:08:39.019143 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="registry-server" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.019151 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="registry-server" Oct 13 09:08:39 crc kubenswrapper[4833]: E1013 09:08:39.019165 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="extract-utilities" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.019171 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="extract-utilities" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.019360 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f9836c-8f17-4927-a9e4-c43ac44a64fb" containerName="registry-server" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.023408 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.033685 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvz22"] Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.173107 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-utilities\") pod \"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.173172 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99b4q\" (UniqueName: \"kubernetes.io/projected/95ac0995-dbea-414e-ad58-0b9916f6a16f-kube-api-access-99b4q\") pod \"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.173196 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-catalog-content\") pod \"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.276578 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-catalog-content\") pod \"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.276813 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-utilities\") pod 
\"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.276848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99b4q\" (UniqueName: \"kubernetes.io/projected/95ac0995-dbea-414e-ad58-0b9916f6a16f-kube-api-access-99b4q\") pod \"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.277117 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-catalog-content\") pod \"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.277423 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-utilities\") pod \"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.297733 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99b4q\" (UniqueName: \"kubernetes.io/projected/95ac0995-dbea-414e-ad58-0b9916f6a16f-kube-api-access-99b4q\") pod \"community-operators-vvz22\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.380746 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:39 crc kubenswrapper[4833]: I1013 09:08:39.891861 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvz22"] Oct 13 09:08:40 crc kubenswrapper[4833]: I1013 09:08:40.772798 4833 generic.go:334] "Generic (PLEG): container finished" podID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerID="65a9561f64d8a72229b1f06f548a51f442e71cf6f1dee404278ddc4a9705de12" exitCode=0 Oct 13 09:08:40 crc kubenswrapper[4833]: I1013 09:08:40.772887 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvz22" event={"ID":"95ac0995-dbea-414e-ad58-0b9916f6a16f","Type":"ContainerDied","Data":"65a9561f64d8a72229b1f06f548a51f442e71cf6f1dee404278ddc4a9705de12"} Oct 13 09:08:40 crc kubenswrapper[4833]: I1013 09:08:40.773453 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvz22" event={"ID":"95ac0995-dbea-414e-ad58-0b9916f6a16f","Type":"ContainerStarted","Data":"b9511d27612fcfc33a7c4b9f65844940d07b65d4e9d78588afbdfe1e1d2279e0"} Oct 13 09:08:41 crc kubenswrapper[4833]: I1013 09:08:41.784625 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvz22" event={"ID":"95ac0995-dbea-414e-ad58-0b9916f6a16f","Type":"ContainerStarted","Data":"fff07e866dc4ad04417280b990066187a257eba34c7d5dedf890036f25860058"} Oct 13 09:08:43 crc kubenswrapper[4833]: I1013 09:08:43.803797 4833 generic.go:334] "Generic (PLEG): container finished" podID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerID="fff07e866dc4ad04417280b990066187a257eba34c7d5dedf890036f25860058" exitCode=0 Oct 13 09:08:43 crc kubenswrapper[4833]: I1013 09:08:43.803886 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvz22" event={"ID":"95ac0995-dbea-414e-ad58-0b9916f6a16f","Type":"ContainerDied","Data":"fff07e866dc4ad04417280b990066187a257eba34c7d5dedf890036f25860058"} Oct 13 09:08:44 crc kubenswrapper[4833]: I1013 09:08:44.816564 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvz22" event={"ID":"95ac0995-dbea-414e-ad58-0b9916f6a16f","Type":"ContainerStarted","Data":"5c4740540a8bef166a6148e9c21cd8efe6d13bdc3190419099c54c2332f8dce0"} Oct 13 09:08:44 crc kubenswrapper[4833]: I1013 09:08:44.847744 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vvz22" podStartSLOduration=3.150912863 podStartE2EDuration="6.847710412s" podCreationTimestamp="2025-10-13 09:08:38 +0000 UTC" firstStartedPulling="2025-10-13 09:08:40.775298194 +0000 UTC m=+9610.875721110" lastFinishedPulling="2025-10-13 09:08:44.472095743 +0000 UTC m=+9614.572518659" observedRunningTime="2025-10-13 09:08:44.833109517 +0000 UTC m=+9614.933532453" watchObservedRunningTime="2025-10-13 09:08:44.847710412 +0000 UTC m=+9614.948133338" Oct 13 09:08:49 crc kubenswrapper[4833]: I1013 09:08:49.382421 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:49 crc kubenswrapper[4833]: I1013 09:08:49.383030 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:49 crc kubenswrapper[4833]: I1013 09:08:49.442056 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:49 crc kubenswrapper[4833]: I1013 09:08:49.929755 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:50 crc kubenswrapper[4833]: I1013 09:08:50.424120 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vvz22"] Oct 13 09:08:51 crc kubenswrapper[4833]: I1013 09:08:51.918809 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vvz22" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerName="registry-server" containerID="cri-o://5c4740540a8bef166a6148e9c21cd8efe6d13bdc3190419099c54c2332f8dce0" gracePeriod=2 Oct 13 09:08:52 crc kubenswrapper[4833]: I1013 09:08:52.928862 4833 generic.go:334] "Generic (PLEG): container finished" podID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerID="5c4740540a8bef166a6148e9c21cd8efe6d13bdc3190419099c54c2332f8dce0" exitCode=0 Oct 13 09:08:52 crc kubenswrapper[4833]: I1013 09:08:52.928971 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvz22" event={"ID":"95ac0995-dbea-414e-ad58-0b9916f6a16f","Type":"ContainerDied","Data":"5c4740540a8bef166a6148e9c21cd8efe6d13bdc3190419099c54c2332f8dce0"} Oct 13 09:08:52 crc kubenswrapper[4833]: I1013 09:08:52.929401 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvz22" event={"ID":"95ac0995-dbea-414e-ad58-0b9916f6a16f","Type":"ContainerDied","Data":"b9511d27612fcfc33a7c4b9f65844940d07b65d4e9d78588afbdfe1e1d2279e0"} Oct 13 09:08:52 crc kubenswrapper[4833]: I1013 09:08:52.929417 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9511d27612fcfc33a7c4b9f65844940d07b65d4e9d78588afbdfe1e1d2279e0" Oct 13 09:08:52 crc kubenswrapper[4833]: I1013 09:08:52.931824 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.000119 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99b4q\" (UniqueName: \"kubernetes.io/projected/95ac0995-dbea-414e-ad58-0b9916f6a16f-kube-api-access-99b4q\") pod \"95ac0995-dbea-414e-ad58-0b9916f6a16f\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.000176 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-catalog-content\") pod \"95ac0995-dbea-414e-ad58-0b9916f6a16f\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.000334 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-utilities\") pod \"95ac0995-dbea-414e-ad58-0b9916f6a16f\" (UID: \"95ac0995-dbea-414e-ad58-0b9916f6a16f\") " Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.001034 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-utilities" (OuterVolumeSpecName: "utilities") pod "95ac0995-dbea-414e-ad58-0b9916f6a16f" (UID: "95ac0995-dbea-414e-ad58-0b9916f6a16f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.005719 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ac0995-dbea-414e-ad58-0b9916f6a16f-kube-api-access-99b4q" (OuterVolumeSpecName: "kube-api-access-99b4q") pod "95ac0995-dbea-414e-ad58-0b9916f6a16f" (UID: "95ac0995-dbea-414e-ad58-0b9916f6a16f"). InnerVolumeSpecName "kube-api-access-99b4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.044957 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95ac0995-dbea-414e-ad58-0b9916f6a16f" (UID: "95ac0995-dbea-414e-ad58-0b9916f6a16f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.102995 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.103035 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99b4q\" (UniqueName: \"kubernetes.io/projected/95ac0995-dbea-414e-ad58-0b9916f6a16f-kube-api-access-99b4q\") on node \"crc\" DevicePath \"\"" Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.103049 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ac0995-dbea-414e-ad58-0b9916f6a16f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.936934 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vvz22" Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.966934 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vvz22"] Oct 13 09:08:53 crc kubenswrapper[4833]: I1013 09:08:53.976828 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vvz22"] Oct 13 09:08:54 crc kubenswrapper[4833]: I1013 09:08:54.638740 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" path="/var/lib/kubelet/pods/95ac0995-dbea-414e-ad58-0b9916f6a16f/volumes" Oct 13 09:11:00 crc kubenswrapper[4833]: I1013 09:11:00.542820 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:11:00 crc kubenswrapper[4833]: I1013 09:11:00.543587 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:11:30 crc kubenswrapper[4833]: I1013 09:11:30.542219 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:11:30 crc kubenswrapper[4833]: I1013 09:11:30.542773 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:12:00 crc kubenswrapper[4833]: I1013 09:12:00.542402 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:12:00 crc kubenswrapper[4833]: I1013 09:12:00.542880 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:12:00 crc kubenswrapper[4833]: I1013 09:12:00.542924 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 09:12:00 crc kubenswrapper[4833]: I1013 09:12:00.543774 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Oct 13 09:12:00 crc kubenswrapper[4833]: I1013 09:12:00.543819 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" gracePeriod=600 Oct 13 09:12:00 crc kubenswrapper[4833]: E1013 09:12:00.675380 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:12:01 crc kubenswrapper[4833]: I1013 09:12:01.026588 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" exitCode=0 Oct 13 09:12:01 crc kubenswrapper[4833]: I1013 09:12:01.026638 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138"} Oct 13 09:12:01 crc kubenswrapper[4833]: I1013 09:12:01.026678 4833 scope.go:117] "RemoveContainer" containerID="c1fc0c40f2a1959157f45de53e1df0156fcbee785a2270243197e2ebd2f4a145" Oct 13 09:12:01 crc kubenswrapper[4833]: I1013 09:12:01.027364 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:12:01 crc kubenswrapper[4833]: E1013 09:12:01.027628 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:12:15 crc kubenswrapper[4833]: I1013 09:12:15.627260 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:12:15 crc kubenswrapper[4833]: E1013 09:12:15.628161 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:12:27 crc kubenswrapper[4833]: I1013 09:12:27.627421 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:12:27 crc kubenswrapper[4833]: E1013 09:12:27.628307 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:12:39 crc kubenswrapper[4833]: I1013 09:12:39.628002 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:12:39 crc kubenswrapper[4833]: E1013 09:12:39.629091 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:12:51 crc kubenswrapper[4833]: I1013 09:12:51.628258 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:12:51 crc kubenswrapper[4833]: E1013 09:12:51.629180 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:13:03 crc kubenswrapper[4833]: I1013 09:13:03.627892 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:13:03 crc kubenswrapper[4833]: E1013 09:13:03.628875 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:13:18 crc kubenswrapper[4833]: I1013 09:13:18.627335 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:13:18 crc kubenswrapper[4833]: E1013 09:13:18.628319 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:13:30 crc kubenswrapper[4833]: I1013 09:13:30.648471 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:13:30 crc kubenswrapper[4833]: E1013 09:13:30.650589 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" 
podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:13:42 crc kubenswrapper[4833]: I1013 09:13:42.627487 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:13:42 crc kubenswrapper[4833]: E1013 09:13:42.628424 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:13:56 crc kubenswrapper[4833]: I1013 09:13:56.627407 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:13:56 crc kubenswrapper[4833]: E1013 09:13:56.628444 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:14:07 crc kubenswrapper[4833]: I1013 09:14:07.381383 4833 generic.go:334] "Generic (PLEG): container finished" podID="3f103fdd-b425-420e-99f5-e73aaa0b91cc" containerID="ad3e546a7f77c6b7ef02fd53c8428a94eb2781b42ccd4e89d52f6534ab79fc37" exitCode=0 Oct 13 09:14:07 crc kubenswrapper[4833]: I1013 09:14:07.381491 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd" event={"ID":"3f103fdd-b425-420e-99f5-e73aaa0b91cc","Type":"ContainerDied","Data":"ad3e546a7f77c6b7ef02fd53c8428a94eb2781b42ccd4e89d52f6534ab79fc37"} Oct 13 09:14:07 crc kubenswrapper[4833]: I1013 09:14:07.626966 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:14:07 crc kubenswrapper[4833]: E1013 09:14:07.627213 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:08.876268 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.045724 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-combined-ca-bundle\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.045811 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxxdd\" (UniqueName: \"kubernetes.io/projected/3f103fdd-b425-420e-99f5-e73aaa0b91cc-kube-api-access-mxxdd\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.045884 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cells-global-config-0\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.046027 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-1\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.046057 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-inventory\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.046143 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-0\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.046230 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-1\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.046248 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-0\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.046319 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-ssh-key\") pod \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\" (UID: \"3f103fdd-b425-420e-99f5-e73aaa0b91cc\") " Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.065962 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3f103fdd-b425-420e-99f5-e73aaa0b91cc-kube-api-access-mxxdd" (OuterVolumeSpecName: "kube-api-access-mxxdd") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "kube-api-access-mxxdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.066025 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.080215 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.085642 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.086138 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-inventory" (OuterVolumeSpecName: "inventory") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.087292 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.088298 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.097108 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.104119 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3f103fdd-b425-420e-99f5-e73aaa0b91cc" (UID: "3f103fdd-b425-420e-99f5-e73aaa0b91cc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.148985 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxxdd\" (UniqueName: \"kubernetes.io/projected/3f103fdd-b425-420e-99f5-e73aaa0b91cc-kube-api-access-mxxdd\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.149031 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.149046 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.149058 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.149070 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.149082 4833 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.149092 4833 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.149101 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.149113 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f103fdd-b425-420e-99f5-e73aaa0b91cc-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.402490 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd" event={"ID":"3f103fdd-b425-420e-99f5-e73aaa0b91cc","Type":"ContainerDied","Data":"9191cb1eb05a7801417817ef1f22311af5919fe30fe9e5372ad232d7590460c5"} Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.402594 4833 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9191cb1eb05a7801417817ef1f22311af5919fe30fe9e5372ad232d7590460c5" Oct 13 09:14:09 crc kubenswrapper[4833]: I1013 09:14:09.402529 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd" Oct 13 09:14:20 crc kubenswrapper[4833]: I1013 09:14:20.634747 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:14:20 crc kubenswrapper[4833]: E1013 09:14:20.635584 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:14:35 crc kubenswrapper[4833]: I1013 09:14:35.627255 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:14:35 crc kubenswrapper[4833]: E1013 09:14:35.628426 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:14:48 crc kubenswrapper[4833]: I1013 09:14:48.627894 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:14:48 crc kubenswrapper[4833]: E1013 09:14:48.628722 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:14:58 crc kubenswrapper[4833]: I1013 09:14:58.474117 4833 scope.go:117] "RemoveContainer" containerID="5c4740540a8bef166a6148e9c21cd8efe6d13bdc3190419099c54c2332f8dce0" Oct 13 09:14:58 crc kubenswrapper[4833]: I1013 09:14:58.505958 4833 scope.go:117] "RemoveContainer" containerID="65a9561f64d8a72229b1f06f548a51f442e71cf6f1dee404278ddc4a9705de12" Oct 13 09:14:58 crc kubenswrapper[4833]: I1013 09:14:58.532778 4833 scope.go:117] "RemoveContainer" containerID="fff07e866dc4ad04417280b990066187a257eba34c7d5dedf890036f25860058" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.155710 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7"] Oct 13 09:15:00 crc kubenswrapper[4833]: E1013 09:15:00.156603 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerName="extract-content" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.156624 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerName="extract-content" Oct 13 09:15:00 crc kubenswrapper[4833]: E1013 09:15:00.156663 4833 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerName="registry-server" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.156672 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerName="registry-server" Oct 13 09:15:00 crc kubenswrapper[4833]: E1013 09:15:00.156717 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f103fdd-b425-420e-99f5-e73aaa0b91cc" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.156734 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f103fdd-b425-420e-99f5-e73aaa0b91cc" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Oct 13 09:15:00 crc kubenswrapper[4833]: E1013 09:15:00.156750 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerName="extract-utilities" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.156756 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerName="extract-utilities" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.156992 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ac0995-dbea-414e-ad58-0b9916f6a16f" containerName="registry-server" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.157016 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f103fdd-b425-420e-99f5-e73aaa0b91cc" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.158022 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.160652 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.160847 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.170668 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7"] Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.241726 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/129df4c0-cfc0-47f5-af79-8f044985a903-config-volume\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.241806 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/129df4c0-cfc0-47f5-af79-8f044985a903-secret-volume\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.241935 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjjqf\" (UniqueName: 
\"kubernetes.io/projected/129df4c0-cfc0-47f5-af79-8f044985a903-kube-api-access-vjjqf\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.343503 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjjqf\" (UniqueName: \"kubernetes.io/projected/129df4c0-cfc0-47f5-af79-8f044985a903-kube-api-access-vjjqf\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.343824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/129df4c0-cfc0-47f5-af79-8f044985a903-config-volume\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.343882 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/129df4c0-cfc0-47f5-af79-8f044985a903-secret-volume\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.344808 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/129df4c0-cfc0-47f5-af79-8f044985a903-config-volume\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.350225 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/129df4c0-cfc0-47f5-af79-8f044985a903-secret-volume\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.358063 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjjqf\" (UniqueName: \"kubernetes.io/projected/129df4c0-cfc0-47f5-af79-8f044985a903-kube-api-access-vjjqf\") pod \"collect-profiles-29339115-x5ww7\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.500340 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:00 crc kubenswrapper[4833]: I1013 09:15:00.996250 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7"] Oct 13 09:15:01 crc kubenswrapper[4833]: I1013 09:15:01.971937 4833 generic.go:334] "Generic (PLEG): container finished" podID="129df4c0-cfc0-47f5-af79-8f044985a903" containerID="326b4f6f262b8f46a4db78ce9e535cc90a12997c2536b1e3b3adc4f00da1479c" exitCode=0 Oct 13 09:15:01 crc kubenswrapper[4833]: I1013 09:15:01.972451 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" event={"ID":"129df4c0-cfc0-47f5-af79-8f044985a903","Type":"ContainerDied","Data":"326b4f6f262b8f46a4db78ce9e535cc90a12997c2536b1e3b3adc4f00da1479c"} Oct 13 09:15:01 crc kubenswrapper[4833]: I1013 09:15:01.972482 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" event={"ID":"129df4c0-cfc0-47f5-af79-8f044985a903","Type":"ContainerStarted","Data":"514ecfe4847d94f79f2b6d5bd853571cec1cba6da2a11beb7651613ede0d296d"} Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.430526 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.530879 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjjqf\" (UniqueName: \"kubernetes.io/projected/129df4c0-cfc0-47f5-af79-8f044985a903-kube-api-access-vjjqf\") pod \"129df4c0-cfc0-47f5-af79-8f044985a903\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.530946 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/129df4c0-cfc0-47f5-af79-8f044985a903-config-volume\") pod \"129df4c0-cfc0-47f5-af79-8f044985a903\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.531157 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/129df4c0-cfc0-47f5-af79-8f044985a903-secret-volume\") pod \"129df4c0-cfc0-47f5-af79-8f044985a903\" (UID: \"129df4c0-cfc0-47f5-af79-8f044985a903\") " Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.532570 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/129df4c0-cfc0-47f5-af79-8f044985a903-config-volume" (OuterVolumeSpecName: "config-volume") pod "129df4c0-cfc0-47f5-af79-8f044985a903" (UID: "129df4c0-cfc0-47f5-af79-8f044985a903"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.537413 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129df4c0-cfc0-47f5-af79-8f044985a903-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "129df4c0-cfc0-47f5-af79-8f044985a903" (UID: "129df4c0-cfc0-47f5-af79-8f044985a903"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.545803 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129df4c0-cfc0-47f5-af79-8f044985a903-kube-api-access-vjjqf" (OuterVolumeSpecName: "kube-api-access-vjjqf") pod "129df4c0-cfc0-47f5-af79-8f044985a903" (UID: "129df4c0-cfc0-47f5-af79-8f044985a903"). InnerVolumeSpecName "kube-api-access-vjjqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.627146 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:15:03 crc kubenswrapper[4833]: E1013 09:15:03.627445 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.634120 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjjqf\" (UniqueName: \"kubernetes.io/projected/129df4c0-cfc0-47f5-af79-8f044985a903-kube-api-access-vjjqf\") on node \"crc\" DevicePath \"\"" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.634160 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/129df4c0-cfc0-47f5-af79-8f044985a903-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.634173 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/129df4c0-cfc0-47f5-af79-8f044985a903-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.991274 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" event={"ID":"129df4c0-cfc0-47f5-af79-8f044985a903","Type":"ContainerDied","Data":"514ecfe4847d94f79f2b6d5bd853571cec1cba6da2a11beb7651613ede0d296d"} Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.991595 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="514ecfe4847d94f79f2b6d5bd853571cec1cba6da2a11beb7651613ede0d296d" Oct 13 09:15:03 crc kubenswrapper[4833]: I1013 09:15:03.991315 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339115-x5ww7" Oct 13 09:15:04 crc kubenswrapper[4833]: I1013 09:15:04.524268 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"] Oct 13 09:15:04 crc kubenswrapper[4833]: I1013 09:15:04.534478 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339070-wsc64"] Oct 13 09:15:04 crc kubenswrapper[4833]: I1013 09:15:04.641861 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a150200-a8a1-42f8-add6-8b78e4b6eb6c" path="/var/lib/kubelet/pods/3a150200-a8a1-42f8-add6-8b78e4b6eb6c/volumes" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.453146 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r2nsm"] Oct 13 09:15:11 crc kubenswrapper[4833]: E1013 09:15:11.454705 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129df4c0-cfc0-47f5-af79-8f044985a903" containerName="collect-profiles" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.454730 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="129df4c0-cfc0-47f5-af79-8f044985a903" containerName="collect-profiles" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.455237 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="129df4c0-cfc0-47f5-af79-8f044985a903" containerName="collect-profiles" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.458423 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.461445 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2nsm"] Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.507935 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qgjc\" (UniqueName: \"kubernetes.io/projected/228a679e-9908-48b2-af4b-6e6b7be8caad-kube-api-access-6qgjc\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.508168 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-catalog-content\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.508225 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-utilities\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.610767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-utilities\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: 
I1013 09:15:11.610895 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qgjc\" (UniqueName: \"kubernetes.io/projected/228a679e-9908-48b2-af4b-6e6b7be8caad-kube-api-access-6qgjc\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.611024 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-catalog-content\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.611564 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-catalog-content\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.611568 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-utilities\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:11 crc kubenswrapper[4833]: I1013 09:15:11.882112 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qgjc\" (UniqueName: \"kubernetes.io/projected/228a679e-9908-48b2-af4b-6e6b7be8caad-kube-api-access-6qgjc\") pod \"certified-operators-r2nsm\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:12 crc kubenswrapper[4833]: I1013 09:15:12.096772 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:12 crc kubenswrapper[4833]: I1013 09:15:12.664731 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2nsm"] Oct 13 09:15:13 crc kubenswrapper[4833]: I1013 09:15:13.113040 4833 generic.go:334] "Generic (PLEG): container finished" podID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerID="26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8" exitCode=0 Oct 13 09:15:13 crc kubenswrapper[4833]: I1013 09:15:13.113084 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2nsm" event={"ID":"228a679e-9908-48b2-af4b-6e6b7be8caad","Type":"ContainerDied","Data":"26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8"} Oct 13 09:15:13 crc kubenswrapper[4833]: I1013 09:15:13.113362 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2nsm" event={"ID":"228a679e-9908-48b2-af4b-6e6b7be8caad","Type":"ContainerStarted","Data":"3abe6183c530d196a5c22ab5b075bd9c222ff23315473af5bc9340e1e5c4de14"} Oct 13 09:15:13 crc kubenswrapper[4833]: I1013 09:15:13.116051 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 09:15:14 crc kubenswrapper[4833]: I1013 09:15:14.123833 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2nsm" event={"ID":"228a679e-9908-48b2-af4b-6e6b7be8caad","Type":"ContainerStarted","Data":"043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda"} Oct 13 09:15:16 crc kubenswrapper[4833]: I1013 09:15:16.149949 4833 generic.go:334] "Generic (PLEG): container finished" podID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerID="043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda" exitCode=0 Oct 13 09:15:16 crc kubenswrapper[4833]: I1013 09:15:16.150034 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2nsm" event={"ID":"228a679e-9908-48b2-af4b-6e6b7be8caad","Type":"ContainerDied","Data":"043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda"} Oct 13 09:15:17 crc kubenswrapper[4833]: I1013 09:15:17.163787 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2nsm" event={"ID":"228a679e-9908-48b2-af4b-6e6b7be8caad","Type":"ContainerStarted","Data":"04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca"} Oct 13 09:15:17 crc kubenswrapper[4833]: I1013 09:15:17.183632 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r2nsm" podStartSLOduration=2.581592189 podStartE2EDuration="6.183514937s" podCreationTimestamp="2025-10-13 09:15:11 +0000 UTC" firstStartedPulling="2025-10-13 09:15:13.115811324 +0000 UTC m=+10003.216234240" lastFinishedPulling="2025-10-13 09:15:16.717734072 +0000 UTC m=+10006.818156988" observedRunningTime="2025-10-13 09:15:17.182289463 +0000 UTC m=+10007.282712389" watchObservedRunningTime="2025-10-13 09:15:17.183514937 +0000 UTC m=+10007.283937853" Oct 13 09:15:18 crc kubenswrapper[4833]: I1013 09:15:18.627855 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:15:18 crc kubenswrapper[4833]: E1013 09:15:18.628440 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:15:22 crc kubenswrapper[4833]: I1013 09:15:22.096973 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:22 crc kubenswrapper[4833]: I1013 09:15:22.097640 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:22 crc kubenswrapper[4833]: I1013 09:15:22.182054 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:22 crc kubenswrapper[4833]: I1013 09:15:22.300183 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:22 crc kubenswrapper[4833]: I1013 09:15:22.430033 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2nsm"] Oct 13 09:15:24 crc kubenswrapper[4833]: I1013 09:15:24.246851 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r2nsm" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerName="registry-server" containerID="cri-o://04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca" gracePeriod=2 Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.057469 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.154531 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-catalog-content\") pod \"228a679e-9908-48b2-af4b-6e6b7be8caad\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.154729 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-utilities\") pod \"228a679e-9908-48b2-af4b-6e6b7be8caad\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.154779 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qgjc\" (UniqueName: \"kubernetes.io/projected/228a679e-9908-48b2-af4b-6e6b7be8caad-kube-api-access-6qgjc\") pod \"228a679e-9908-48b2-af4b-6e6b7be8caad\" (UID: \"228a679e-9908-48b2-af4b-6e6b7be8caad\") " Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.155790 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-utilities" (OuterVolumeSpecName: "utilities") pod "228a679e-9908-48b2-af4b-6e6b7be8caad" (UID: "228a679e-9908-48b2-af4b-6e6b7be8caad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.160531 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228a679e-9908-48b2-af4b-6e6b7be8caad-kube-api-access-6qgjc" (OuterVolumeSpecName: "kube-api-access-6qgjc") pod "228a679e-9908-48b2-af4b-6e6b7be8caad" (UID: "228a679e-9908-48b2-af4b-6e6b7be8caad"). InnerVolumeSpecName "kube-api-access-6qgjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.222502 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "228a679e-9908-48b2-af4b-6e6b7be8caad" (UID: "228a679e-9908-48b2-af4b-6e6b7be8caad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.257558 4833 generic.go:334] "Generic (PLEG): container finished" podID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerID="04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca" exitCode=0 Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.257616 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2nsm" event={"ID":"228a679e-9908-48b2-af4b-6e6b7be8caad","Type":"ContainerDied","Data":"04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca"} Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.257651 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2nsm" event={"ID":"228a679e-9908-48b2-af4b-6e6b7be8caad","Type":"ContainerDied","Data":"3abe6183c530d196a5c22ab5b075bd9c222ff23315473af5bc9340e1e5c4de14"} Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.257674 4833 scope.go:117] "RemoveContainer" containerID="04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.257693 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2nsm" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.260420 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.260550 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qgjc\" (UniqueName: \"kubernetes.io/projected/228a679e-9908-48b2-af4b-6e6b7be8caad-kube-api-access-6qgjc\") on node \"crc\" DevicePath \"\"" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.260628 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228a679e-9908-48b2-af4b-6e6b7be8caad-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.278754 4833 scope.go:117] "RemoveContainer" containerID="043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.311920 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2nsm"] Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.319853 4833 scope.go:117] "RemoveContainer" containerID="26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.327516 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r2nsm"] Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.361445 4833 scope.go:117] "RemoveContainer" containerID="04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca" Oct 13 09:15:25 crc kubenswrapper[4833]: E1013 09:15:25.362129 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca\": container with ID starting with 04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca not found: ID does not exist" containerID="04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.362185 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca"} err="failed to get container status \"04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca\": rpc error: code = NotFound desc = could not find container \"04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca\": container with ID starting with 04c9991af84eedd50e1b2417fb6a9f8f88ac45556ce2aad5897f775da5231aca not found: ID does not exist" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.362212 4833 scope.go:117] "RemoveContainer" containerID="043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda" Oct 13 09:15:25 crc kubenswrapper[4833]: E1013 09:15:25.362493 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda\": container with ID starting with 043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda not found: ID does not exist" containerID="043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.362519 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda"} err="failed to get container status \"043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda\": rpc error: code = NotFound desc = could not find container \"043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda\": container with ID starting with 043d5885a6ebf816f362f1ec746f593d0eac010fccc23c8516f409cab58a0cda not found: ID does not exist" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.362556 4833 scope.go:117] "RemoveContainer" containerID="26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8" Oct 13 09:15:25 crc kubenswrapper[4833]: E1013 09:15:25.362741 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8\": container with ID starting with 26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8 not found: ID does not exist" containerID="26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8" Oct 13 09:15:25 crc kubenswrapper[4833]: I1013 09:15:25.362763 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8"} err="failed to get container status \"26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8\": rpc error: code = NotFound desc = could not find container \"26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8\": container with ID starting with 26054ad698400dc64572b2d18b9ab840fd9e5a55b39be0a8256abc1093de02a8 not found: ID does not exist" Oct 13 09:15:26 crc kubenswrapper[4833]: I1013 09:15:26.647030 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" path="/var/lib/kubelet/pods/228a679e-9908-48b2-af4b-6e6b7be8caad/volumes" Oct 13 09:15:30 crc kubenswrapper[4833]: I1013 09:15:30.636755 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:15:30 crc kubenswrapper[4833]: E1013 09:15:30.638245 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:15:44 crc kubenswrapper[4833]: I1013 09:15:44.627506 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:15:44 crc kubenswrapper[4833]: E1013 09:15:44.629882 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:15:57 crc kubenswrapper[4833]: I1013 09:15:57.627064 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:15:57 crc 
kubenswrapper[4833]: E1013 09:15:57.627653 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:15:58 crc kubenswrapper[4833]: I1013 09:15:58.623588 4833 scope.go:117] "RemoveContainer" containerID="c9b9b6898475b5c85ca684d49ab9211072253ba774e669f1a483efeb2b08c9c8" Oct 13 09:16:01 crc kubenswrapper[4833]: I1013 09:16:01.071687 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Oct 13 09:16:01 crc kubenswrapper[4833]: I1013 09:16:01.072438 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5" containerName="adoption" containerID="cri-o://3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1" gracePeriod=30 Oct 13 09:16:09 crc kubenswrapper[4833]: I1013 09:16:09.627929 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:16:09 crc kubenswrapper[4833]: E1013 09:16:09.629184 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.374219 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mpvfc"] Oct 13 09:16:14 crc kubenswrapper[4833]: E1013 09:16:14.375752 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerName="extract-utilities" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.375777 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerName="extract-utilities" Oct 13 09:16:14 crc kubenswrapper[4833]: E1013 09:16:14.375824 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerName="registry-server" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.375833 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerName="registry-server" Oct 13 09:16:14 crc kubenswrapper[4833]: E1013 09:16:14.375861 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerName="extract-content" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.375871 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerName="extract-content" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.376179 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="228a679e-9908-48b2-af4b-6e6b7be8caad" containerName="registry-server" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.378738 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.383371 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpvfc"] Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.483470 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-catalog-content\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.483638 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6hcp\" (UniqueName: \"kubernetes.io/projected/8f31eab0-8bd9-47d0-953c-b39034293170-kube-api-access-p6hcp\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.483799 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-utilities\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.585294 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6hcp\" (UniqueName: \"kubernetes.io/projected/8f31eab0-8bd9-47d0-953c-b39034293170-kube-api-access-p6hcp\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.585433 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-utilities\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.585527 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-catalog-content\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.585917 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-utilities\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.585988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-catalog-content\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.616319 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-p6hcp\" (UniqueName: \"kubernetes.io/projected/8f31eab0-8bd9-47d0-953c-b39034293170-kube-api-access-p6hcp\") pod \"redhat-marketplace-mpvfc\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:14 crc kubenswrapper[4833]: I1013 09:16:14.718119 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:15 crc kubenswrapper[4833]: I1013 09:16:15.243165 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpvfc"] Oct 13 09:16:15 crc kubenswrapper[4833]: I1013 09:16:15.837815 4833 generic.go:334] "Generic (PLEG): container finished" podID="8f31eab0-8bd9-47d0-953c-b39034293170" containerID="c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97" exitCode=0 Oct 13 09:16:15 crc kubenswrapper[4833]: I1013 09:16:15.837872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpvfc" event={"ID":"8f31eab0-8bd9-47d0-953c-b39034293170","Type":"ContainerDied","Data":"c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97"} Oct 13 09:16:15 crc kubenswrapper[4833]: I1013 09:16:15.838172 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpvfc" event={"ID":"8f31eab0-8bd9-47d0-953c-b39034293170","Type":"ContainerStarted","Data":"2917f6e4f70b19f20ef182420f72683ae469fa97e15628462ad5bec4bc0ecede"} Oct 13 09:16:16 crc kubenswrapper[4833]: I1013 09:16:16.853901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpvfc" event={"ID":"8f31eab0-8bd9-47d0-953c-b39034293170","Type":"ContainerStarted","Data":"1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b"} Oct 13 09:16:17 crc kubenswrapper[4833]: I1013 09:16:17.869717 4833 generic.go:334] "Generic (PLEG): container finished" podID="8f31eab0-8bd9-47d0-953c-b39034293170" containerID="1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b" exitCode=0 Oct 13 09:16:17 crc kubenswrapper[4833]: I1013 09:16:17.869766 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpvfc" event={"ID":"8f31eab0-8bd9-47d0-953c-b39034293170","Type":"ContainerDied","Data":"1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b"} Oct 13 09:16:18 crc kubenswrapper[4833]: I1013 09:16:18.885262 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpvfc" event={"ID":"8f31eab0-8bd9-47d0-953c-b39034293170","Type":"ContainerStarted","Data":"aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada"} Oct 13 09:16:18 crc kubenswrapper[4833]: I1013 09:16:18.912150 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mpvfc" podStartSLOduration=2.410725382 podStartE2EDuration="4.91212512s" podCreationTimestamp="2025-10-13 09:16:14 +0000 UTC" firstStartedPulling="2025-10-13 09:16:15.839855806 +0000 UTC m=+10065.940278732" lastFinishedPulling="2025-10-13 09:16:18.341255534 +0000 UTC m=+10068.441678470" observedRunningTime="2025-10-13 09:16:18.904582346 +0000 UTC m=+10069.005005282" watchObservedRunningTime="2025-10-13 09:16:18.91212512 +0000 UTC m=+10069.012548066" Oct 13 09:16:20 crc kubenswrapper[4833]: I1013 09:16:20.643249 4833 scope.go:117] "RemoveContainer" 
containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:16:20 crc kubenswrapper[4833]: E1013 09:16:20.644261 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:16:24 crc kubenswrapper[4833]: I1013 09:16:24.719287 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:24 crc kubenswrapper[4833]: I1013 09:16:24.720156 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:24 crc kubenswrapper[4833]: I1013 09:16:24.777726 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:25 crc kubenswrapper[4833]: I1013 09:16:25.004828 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:25 crc kubenswrapper[4833]: I1013 09:16:25.049490 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpvfc"] Oct 13 09:16:26 crc kubenswrapper[4833]: I1013 09:16:26.981809 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mpvfc" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" containerName="registry-server" containerID="cri-o://aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada" gracePeriod=2 Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.481649 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.590774 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-utilities\") pod \"8f31eab0-8bd9-47d0-953c-b39034293170\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.590863 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6hcp\" (UniqueName: \"kubernetes.io/projected/8f31eab0-8bd9-47d0-953c-b39034293170-kube-api-access-p6hcp\") pod \"8f31eab0-8bd9-47d0-953c-b39034293170\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.590977 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-catalog-content\") pod \"8f31eab0-8bd9-47d0-953c-b39034293170\" (UID: \"8f31eab0-8bd9-47d0-953c-b39034293170\") " Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.591737 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-utilities" (OuterVolumeSpecName: "utilities") pod "8f31eab0-8bd9-47d0-953c-b39034293170" (UID: "8f31eab0-8bd9-47d0-953c-b39034293170"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.592084 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.596604 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f31eab0-8bd9-47d0-953c-b39034293170-kube-api-access-p6hcp" (OuterVolumeSpecName: "kube-api-access-p6hcp") pod "8f31eab0-8bd9-47d0-953c-b39034293170" (UID: "8f31eab0-8bd9-47d0-953c-b39034293170"). InnerVolumeSpecName "kube-api-access-p6hcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.610125 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f31eab0-8bd9-47d0-953c-b39034293170" (UID: "8f31eab0-8bd9-47d0-953c-b39034293170"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.694583 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6hcp\" (UniqueName: \"kubernetes.io/projected/8f31eab0-8bd9-47d0-953c-b39034293170-kube-api-access-p6hcp\") on node \"crc\" DevicePath \"\"" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.694845 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f31eab0-8bd9-47d0-953c-b39034293170-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.997523 4833 generic.go:334] "Generic (PLEG): container finished" podID="8f31eab0-8bd9-47d0-953c-b39034293170" containerID="aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada" exitCode=0 Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.997616 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpvfc" event={"ID":"8f31eab0-8bd9-47d0-953c-b39034293170","Type":"ContainerDied","Data":"aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada"} Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.997642 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpvfc" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.997674 4833 scope.go:117] "RemoveContainer" containerID="aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada" Oct 13 09:16:27 crc kubenswrapper[4833]: I1013 09:16:27.997657 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpvfc" event={"ID":"8f31eab0-8bd9-47d0-953c-b39034293170","Type":"ContainerDied","Data":"2917f6e4f70b19f20ef182420f72683ae469fa97e15628462ad5bec4bc0ecede"} Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.024717 4833 scope.go:117] "RemoveContainer" containerID="1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b" Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.041388 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpvfc"] Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.053787 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpvfc"] Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.061588 4833 scope.go:117] "RemoveContainer" containerID="c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97" Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.109632 4833 scope.go:117] "RemoveContainer" containerID="aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada" Oct 13 09:16:28 crc kubenswrapper[4833]: E1013 09:16:28.110066 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada\": container with ID starting with aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada not found: ID does not exist" containerID="aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada" Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.110092 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada"} err="failed to get container status \"aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada\": rpc error: code = NotFound desc = could not find container \"aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada\": container with ID starting with aacf8dfeb5838c6bde6a76c971e58f4e96dd020ed64a57b06d37b35c2a7edada not found: ID does not exist" Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.110112 4833 scope.go:117] "RemoveContainer" containerID="1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b" Oct 13 09:16:28 crc kubenswrapper[4833]: E1013 09:16:28.110485 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b\": container with ID starting with 1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b not found: ID does not exist" containerID="1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b" Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.110516 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b"} err="failed to get container status \"1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b\": rpc error: code = NotFound desc = could not find 
container \"1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b\": container with ID starting with 1869a86338de553833abeb619676f9ab8ef5af0673a037b4f48ba8b5f6615a7b not found: ID does not exist" Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.110548 4833 scope.go:117] "RemoveContainer" containerID="c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97" Oct 13 09:16:28 crc kubenswrapper[4833]: E1013 09:16:28.110881 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97\": container with ID starting with c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97 not found: ID does not exist" containerID="c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97" Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.110926 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97"} err="failed to get container status \"c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97\": rpc error: code = NotFound desc = could not find container \"c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97\": container with ID starting with c3992e1dd496bbafe9395e986d4881acdd2e3974e3ccbd519167c75cf62b2b97 not found: ID does not exist" Oct 13 09:16:28 crc kubenswrapper[4833]: I1013 09:16:28.642206 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" path="/var/lib/kubelet/pods/8f31eab0-8bd9-47d0-953c-b39034293170/volumes" Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.685874 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.796761 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\") pod \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") " Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.796930 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq4dg\" (UniqueName: \"kubernetes.io/projected/b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5-kube-api-access-fq4dg\") pod \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\" (UID: \"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5\") " Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.803285 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5-kube-api-access-fq4dg" (OuterVolumeSpecName: "kube-api-access-fq4dg") pod "b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5" (UID: "b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5"). InnerVolumeSpecName "kube-api-access-fq4dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.825682 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423" (OuterVolumeSpecName: "mariadb-data") pod "b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5" (UID: "b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5"). InnerVolumeSpecName "pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.899658 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\") on node \"crc\" " Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.899698 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq4dg\" (UniqueName: \"kubernetes.io/projected/b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5-kube-api-access-fq4dg\") on node \"crc\" DevicePath \"\"" Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.936494 4833 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 13 09:16:31 crc kubenswrapper[4833]: I1013 09:16:31.936691 4833 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423") on node "crc" Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.001867 4833 reconciler_common.go:293] "Volume detached for volume \"pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4677bd0-70bf-4dd7-8b6b-d1e2a4fbf423\") on node \"crc\" DevicePath \"\"" Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.054280 4833 generic.go:334] "Generic (PLEG): container finished" podID="b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5" containerID="3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1" exitCode=137 Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.054335 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.054342 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5","Type":"ContainerDied","Data":"3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1"} Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.054394 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5","Type":"ContainerDied","Data":"15c06747e4cb1e7e82e07fae31f19f664be33e6ea2fd261e1705177cd1dc97de"} Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.054421 4833 scope.go:117] "RemoveContainer" containerID="3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1" Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.075737 4833 scope.go:117] "RemoveContainer" containerID="3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1" Oct 13 09:16:32 crc kubenswrapper[4833]: E1013 09:16:32.076080 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1\": container with ID starting with 3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1 not found: ID does not exist" containerID="3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1" Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.076999 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1"} err="failed to get container status \"3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1\": rpc error: code = NotFound desc = could not find container \"3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1\": container with ID starting with 3d1e9975e04c219a7bf74b5d8ef440fceb0482b1dacc25b66ddb8ed4b57c43f1 not found: ID does not exist" Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.097155 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.109420 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.643696 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5" path="/var/lib/kubelet/pods/b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5/volumes" Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.807827 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Oct 13 09:16:32 crc kubenswrapper[4833]: I1013 09:16:32.808074 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="dde840da-a72a-4913-a6ec-12e725edd967" containerName="adoption" containerID="cri-o://31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958" gracePeriod=30 Oct 13 09:16:35 crc kubenswrapper[4833]: I1013 09:16:35.629264 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:16:35 crc kubenswrapper[4833]: E1013 09:16:35.630800 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:16:47 crc kubenswrapper[4833]: I1013 09:16:47.627155 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:16:47 crc kubenswrapper[4833]: E1013 09:16:47.628065 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:17:01 crc kubenswrapper[4833]: I1013 09:17:01.628882 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:17:02 crc kubenswrapper[4833]: I1013 09:17:02.440519 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"c711e429a871ab17078d49d8686220613b7c9459f066d2cfadebd3ed9d5979ed"} Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.249017 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.357089 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3\") pod \"dde840da-a72a-4913-a6ec-12e725edd967\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.357181 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm45f\" (UniqueName: \"kubernetes.io/projected/dde840da-a72a-4913-a6ec-12e725edd967-kube-api-access-mm45f\") pod \"dde840da-a72a-4913-a6ec-12e725edd967\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.357415 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/dde840da-a72a-4913-a6ec-12e725edd967-ovn-data-cert\") pod \"dde840da-a72a-4913-a6ec-12e725edd967\" (UID: \"dde840da-a72a-4913-a6ec-12e725edd967\") " Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.363932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde840da-a72a-4913-a6ec-12e725edd967-kube-api-access-mm45f" (OuterVolumeSpecName: "kube-api-access-mm45f") pod "dde840da-a72a-4913-a6ec-12e725edd967" (UID: "dde840da-a72a-4913-a6ec-12e725edd967"). InnerVolumeSpecName "kube-api-access-mm45f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.368222 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde840da-a72a-4913-a6ec-12e725edd967-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "dde840da-a72a-4913-a6ec-12e725edd967" (UID: "dde840da-a72a-4913-a6ec-12e725edd967"). InnerVolumeSpecName "ovn-data-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.450964 4833 generic.go:334] "Generic (PLEG): container finished" podID="dde840da-a72a-4913-a6ec-12e725edd967" containerID="31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958" exitCode=137 Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.451123 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"dde840da-a72a-4913-a6ec-12e725edd967","Type":"ContainerDied","Data":"31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958"} Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.451229 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"dde840da-a72a-4913-a6ec-12e725edd967","Type":"ContainerDied","Data":"13e1381f1f883f78f12534ee73fed620f509c2ebe2464d124b37d37f98202a31"} Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.451292 4833 scope.go:117] "RemoveContainer" containerID="31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958" Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.451424 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.460506 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/dde840da-a72a-4913-a6ec-12e725edd967-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Oct 13 09:17:03 crc kubenswrapper[4833]: I1013 09:17:03.460564 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm45f\" (UniqueName: \"kubernetes.io/projected/dde840da-a72a-4913-a6ec-12e725edd967-kube-api-access-mm45f\") on node \"crc\" DevicePath \"\"" Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.109006 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3" (OuterVolumeSpecName: "ovn-data") pod "dde840da-a72a-4913-a6ec-12e725edd967" (UID: "dde840da-a72a-4913-a6ec-12e725edd967"). InnerVolumeSpecName "pvc-a70cf987-2495-46db-9d19-48453e28ffd3". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.192155 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a70cf987-2495-46db-9d19-48453e28ffd3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3\") on node \"crc\" " Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.202318 4833 scope.go:117] "RemoveContainer" containerID="31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958" Oct 13 09:17:04 crc kubenswrapper[4833]: E1013 09:17:04.205999 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958\": container with ID starting with 31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958 not found: ID does not exist" containerID="31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958" Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.206037 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958"} err="failed to get container status \"31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958\": rpc error: code = NotFound desc = could not find container \"31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958\": container with ID starting with 31f44ace549c08589d45324b9041b59d0471b076ac85218bfff68d4bf2752958 not found: ID does not exist" Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.232419 4833 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.232746 4833 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a70cf987-2495-46db-9d19-48453e28ffd3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3") on node "crc" Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.293822 4833 reconciler_common.go:293] "Volume detached for volume \"pvc-a70cf987-2495-46db-9d19-48453e28ffd3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a70cf987-2495-46db-9d19-48453e28ffd3\") on node \"crc\" DevicePath \"\"" Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.392739 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.402328 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Oct 13 09:17:04 crc kubenswrapper[4833]: I1013 09:17:04.640384 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde840da-a72a-4913-a6ec-12e725edd967" path="/var/lib/kubelet/pods/dde840da-a72a-4913-a6ec-12e725edd967/volumes" Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.826918 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5xxsh/must-gather-rd2pj"] Oct 13 09:18:15 crc kubenswrapper[4833]: E1013 09:18:15.827753 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" containerName="extract-utilities" Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.827766 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" containerName="extract-utilities" Oct 13 09:18:15 crc kubenswrapper[4833]: E1013 09:18:15.827777 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" containerName="registry-server" Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.827783 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" containerName="registry-server" Oct 13 09:18:15 crc kubenswrapper[4833]: E1013 09:18:15.827809 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" containerName="extract-content" Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.827816 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" containerName="extract-content" Oct 13 09:18:15 crc kubenswrapper[4833]: E1013 09:18:15.827834 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde840da-a72a-4913-a6ec-12e725edd967" containerName="adoption" Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.827840 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde840da-a72a-4913-a6ec-12e725edd967" containerName="adoption" Oct 13 09:18:15 crc kubenswrapper[4833]: E1013 09:18:15.827854 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5" containerName="adoption" Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.827860 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5" containerName="adoption" Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.828067 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f31eab0-8bd9-47d0-953c-b39034293170" containerName="registry-server" Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.828077 4833 
Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.828088 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3cb24ce-dd5a-42a4-ba5c-cfd7ca059dd5" containerName="adoption"
Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.829169 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/must-gather-rd2pj"
Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.831278 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5xxsh"/"kube-root-ca.crt"
Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.833268 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5xxsh"/"default-dockercfg-jwn8c"
Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.833316 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5xxsh"/"openshift-service-ca.crt"
Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.845280 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5xxsh/must-gather-rd2pj"]
Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.920751 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7d7251b3-0441-44dd-8966-4c912b8cf2b1-must-gather-output\") pod \"must-gather-rd2pj\" (UID: \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\") " pod="openshift-must-gather-5xxsh/must-gather-rd2pj"
Oct 13 09:18:15 crc kubenswrapper[4833]: I1013 09:18:15.920950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h88mt\" (UniqueName: \"kubernetes.io/projected/7d7251b3-0441-44dd-8966-4c912b8cf2b1-kube-api-access-h88mt\") pod \"must-gather-rd2pj\" (UID: \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\") " pod="openshift-must-gather-5xxsh/must-gather-rd2pj"
Oct 13 09:18:16 crc kubenswrapper[4833]: I1013 09:18:16.022271 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h88mt\" (UniqueName: \"kubernetes.io/projected/7d7251b3-0441-44dd-8966-4c912b8cf2b1-kube-api-access-h88mt\") pod \"must-gather-rd2pj\" (UID: \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\") " pod="openshift-must-gather-5xxsh/must-gather-rd2pj"
Oct 13 09:18:16 crc kubenswrapper[4833]: I1013 09:18:16.022398 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7d7251b3-0441-44dd-8966-4c912b8cf2b1-must-gather-output\") pod \"must-gather-rd2pj\" (UID: \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\") " pod="openshift-must-gather-5xxsh/must-gather-rd2pj"
Oct 13 09:18:16 crc kubenswrapper[4833]: I1013 09:18:16.022811 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7d7251b3-0441-44dd-8966-4c912b8cf2b1-must-gather-output\") pod \"must-gather-rd2pj\" (UID: \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\") " pod="openshift-must-gather-5xxsh/must-gather-rd2pj"
Oct 13 09:18:16 crc kubenswrapper[4833]: I1013 09:18:16.045863 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h88mt\" (UniqueName: \"kubernetes.io/projected/7d7251b3-0441-44dd-8966-4c912b8cf2b1-kube-api-access-h88mt\") pod \"must-gather-rd2pj\" (UID: \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\") " pod="openshift-must-gather-5xxsh/must-gather-rd2pj"
Oct 13 09:18:16 crc kubenswrapper[4833]: I1013 09:18:16.152411 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/must-gather-rd2pj"
Oct 13 09:18:16 crc kubenswrapper[4833]: I1013 09:18:16.671071 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5xxsh/must-gather-rd2pj"]
Oct 13 09:18:17 crc kubenswrapper[4833]: I1013 09:18:17.290277 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/must-gather-rd2pj" event={"ID":"7d7251b3-0441-44dd-8966-4c912b8cf2b1","Type":"ContainerStarted","Data":"f84ea5f2bbc7e6e594b7919c676ff5a565ec0d5ec660fdde169a3d04c76568cb"}
Oct 13 09:18:22 crc kubenswrapper[4833]: I1013 09:18:22.364080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/must-gather-rd2pj" event={"ID":"7d7251b3-0441-44dd-8966-4c912b8cf2b1","Type":"ContainerStarted","Data":"8d6dbd1a2862bc0741fea06aca9ece6084b3fdf6e790e68c275cde669ce1c07d"}
Oct 13 09:18:22 crc kubenswrapper[4833]: I1013 09:18:22.364796 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/must-gather-rd2pj" event={"ID":"7d7251b3-0441-44dd-8966-4c912b8cf2b1","Type":"ContainerStarted","Data":"005dbe276c69cd71bab29411b0c3b798f16f4135c0bb96ae158d83b702f63b62"}
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.642795 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5xxsh/must-gather-rd2pj" podStartSLOduration=6.747845362 podStartE2EDuration="11.642778428s" podCreationTimestamp="2025-10-13 09:18:15 +0000 UTC" firstStartedPulling="2025-10-13 09:18:16.679417417 +0000 UTC m=+10186.779840323" lastFinishedPulling="2025-10-13 09:18:21.574350473 +0000 UTC m=+10191.674773389" observedRunningTime="2025-10-13 09:18:22.380750313 +0000 UTC m=+10192.481173229" watchObservedRunningTime="2025-10-13 09:18:26.642778428 +0000 UTC m=+10196.743201344"
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.649685 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-dzhhk"]
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.651293 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-dzhhk"
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.787068 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqwsv\" (UniqueName: \"kubernetes.io/projected/13006645-b24a-4fd1-9138-ec3fa39c641a-kube-api-access-tqwsv\") pod \"crc-debug-dzhhk\" (UID: \"13006645-b24a-4fd1-9138-ec3fa39c641a\") " pod="openshift-must-gather-5xxsh/crc-debug-dzhhk"
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.787158 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13006645-b24a-4fd1-9138-ec3fa39c641a-host\") pod \"crc-debug-dzhhk\" (UID: \"13006645-b24a-4fd1-9138-ec3fa39c641a\") " pod="openshift-must-gather-5xxsh/crc-debug-dzhhk"
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.889718 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqwsv\" (UniqueName: \"kubernetes.io/projected/13006645-b24a-4fd1-9138-ec3fa39c641a-kube-api-access-tqwsv\") pod \"crc-debug-dzhhk\" (UID: \"13006645-b24a-4fd1-9138-ec3fa39c641a\") " pod="openshift-must-gather-5xxsh/crc-debug-dzhhk"
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.890134 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13006645-b24a-4fd1-9138-ec3fa39c641a-host\") pod \"crc-debug-dzhhk\" (UID: \"13006645-b24a-4fd1-9138-ec3fa39c641a\") " pod="openshift-must-gather-5xxsh/crc-debug-dzhhk"
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.891097 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13006645-b24a-4fd1-9138-ec3fa39c641a-host\") pod \"crc-debug-dzhhk\" (UID: \"13006645-b24a-4fd1-9138-ec3fa39c641a\") " pod="openshift-must-gather-5xxsh/crc-debug-dzhhk"
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.929508 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqwsv\" (UniqueName: \"kubernetes.io/projected/13006645-b24a-4fd1-9138-ec3fa39c641a-kube-api-access-tqwsv\") pod \"crc-debug-dzhhk\" (UID: \"13006645-b24a-4fd1-9138-ec3fa39c641a\") " pod="openshift-must-gather-5xxsh/crc-debug-dzhhk"
Oct 13 09:18:26 crc kubenswrapper[4833]: I1013 09:18:26.981615 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-dzhhk"
Oct 13 09:18:27 crc kubenswrapper[4833]: W1013 09:18:27.016736 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13006645_b24a_4fd1_9138_ec3fa39c641a.slice/crio-062f72ab0e361a4157ffe106c8df1ede8c0e57a301f8d83fef02d6390da8b7ed WatchSource:0}: Error finding container 062f72ab0e361a4157ffe106c8df1ede8c0e57a301f8d83fef02d6390da8b7ed: Status 404 returned error can't find the container with id 062f72ab0e361a4157ffe106c8df1ede8c0e57a301f8d83fef02d6390da8b7ed
Oct 13 09:18:27 crc kubenswrapper[4833]: I1013 09:18:27.431196 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/crc-debug-dzhhk" event={"ID":"13006645-b24a-4fd1-9138-ec3fa39c641a","Type":"ContainerStarted","Data":"062f72ab0e361a4157ffe106c8df1ede8c0e57a301f8d83fef02d6390da8b7ed"}
Oct 13 09:18:39 crc kubenswrapper[4833]: I1013 09:18:39.551157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/crc-debug-dzhhk" event={"ID":"13006645-b24a-4fd1-9138-ec3fa39c641a","Type":"ContainerStarted","Data":"58ef8db1638f22d1af09b65b52f3128ffe3720041ebc9f09c38a19d3fc4b68fb"}
Oct 13 09:18:39 crc kubenswrapper[4833]: I1013 09:18:39.568265 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5xxsh/crc-debug-dzhhk" podStartSLOduration=2.736587536 podStartE2EDuration="13.568247778s" podCreationTimestamp="2025-10-13 09:18:26 +0000 UTC" firstStartedPulling="2025-10-13 09:18:27.019065937 +0000 UTC m=+10197.119488853" lastFinishedPulling="2025-10-13 09:18:37.850726179 +0000 UTC m=+10207.951149095" observedRunningTime="2025-10-13 09:18:39.562836634 +0000 UTC m=+10209.663259570" watchObservedRunningTime="2025-10-13 09:18:39.568247778 +0000 UTC m=+10209.668670694"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.713269 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c7cfj"]
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.719903 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.845804 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c7cfj"]
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.860736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-catalog-content\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.860902 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-utilities\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.860962 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zqd\" (UniqueName: \"kubernetes.io/projected/c3dc9252-e96e-4db1-a00e-1256b4e23708-kube-api-access-r5zqd\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.963340 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-utilities\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.963435 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zqd\" (UniqueName: \"kubernetes.io/projected/c3dc9252-e96e-4db1-a00e-1256b4e23708-kube-api-access-r5zqd\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.963475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-catalog-content\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.964099 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-catalog-content\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.964320 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-utilities\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:42 crc kubenswrapper[4833]: I1013 09:18:42.984065 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zqd\" (UniqueName: \"kubernetes.io/projected/c3dc9252-e96e-4db1-a00e-1256b4e23708-kube-api-access-r5zqd\") pod \"community-operators-c7cfj\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:43 crc kubenswrapper[4833]: I1013 09:18:43.059170 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c7cfj"
Oct 13 09:18:43 crc kubenswrapper[4833]: I1013 09:18:43.867260 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c7cfj"]
Oct 13 09:18:44 crc kubenswrapper[4833]: I1013 09:18:44.604595 4833 generic.go:334] "Generic (PLEG): container finished" podID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerID="a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1" exitCode=0
Oct 13 09:18:44 crc kubenswrapper[4833]: I1013 09:18:44.604850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7cfj" event={"ID":"c3dc9252-e96e-4db1-a00e-1256b4e23708","Type":"ContainerDied","Data":"a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1"}
Oct 13 09:18:44 crc kubenswrapper[4833]: I1013 09:18:44.605141 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7cfj" event={"ID":"c3dc9252-e96e-4db1-a00e-1256b4e23708","Type":"ContainerStarted","Data":"80043d772c2d314c3d5de3583871b8d55cba6008da742132db9e128592f62797"}
Oct 13 09:18:45 crc kubenswrapper[4833]: I1013 09:18:45.616214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7cfj" event={"ID":"c3dc9252-e96e-4db1-a00e-1256b4e23708","Type":"ContainerStarted","Data":"8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79"}
Oct 13 09:18:48 crc kubenswrapper[4833]: I1013 09:18:48.650228 4833 generic.go:334] "Generic (PLEG): container finished" podID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerID="8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79" exitCode=0
Oct 13 09:18:48 crc kubenswrapper[4833]: I1013 09:18:48.650291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7cfj" event={"ID":"c3dc9252-e96e-4db1-a00e-1256b4e23708","Type":"ContainerDied","Data":"8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79"}
Oct 13 09:18:49 crc kubenswrapper[4833]: I1013 09:18:49.662082 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7cfj" event={"ID":"c3dc9252-e96e-4db1-a00e-1256b4e23708","Type":"ContainerStarted","Data":"f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302"}
Oct 13 09:18:49 crc kubenswrapper[4833]: I1013 09:18:49.677382 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c7cfj" podStartSLOduration=3.156670693 podStartE2EDuration="7.677366978s" podCreationTimestamp="2025-10-13 09:18:42 +0000 UTC" firstStartedPulling="2025-10-13 09:18:44.608477081 +0000 UTC m=+10214.708899997" lastFinishedPulling="2025-10-13 09:18:49.129173366 +0000 UTC m=+10219.229596282" observedRunningTime="2025-10-13 09:18:49.675891026 +0000 UTC m=+10219.776313942" watchObservedRunningTime="2025-10-13 09:18:49.677366978 +0000 UTC m=+10219.777789884"
Oct 13 09:18:53 crc kubenswrapper[4833]: I1013 09:18:53.060866 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c7cfj"
probe="readiness" status="" pod="openshift-marketplace/community-operators-c7cfj" Oct 13 09:18:53 crc kubenswrapper[4833]: I1013 09:18:53.061358 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c7cfj" Oct 13 09:18:53 crc kubenswrapper[4833]: I1013 09:18:53.111153 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c7cfj" Oct 13 09:19:03 crc kubenswrapper[4833]: I1013 09:19:03.328952 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c7cfj" Oct 13 09:19:03 crc kubenswrapper[4833]: I1013 09:19:03.410050 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c7cfj"] Oct 13 09:19:03 crc kubenswrapper[4833]: I1013 09:19:03.803161 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c7cfj" podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerName="registry-server" containerID="cri-o://f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302" gracePeriod=2 Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.409175 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c7cfj" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.579900 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-utilities\") pod \"c3dc9252-e96e-4db1-a00e-1256b4e23708\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.580032 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zqd\" (UniqueName: \"kubernetes.io/projected/c3dc9252-e96e-4db1-a00e-1256b4e23708-kube-api-access-r5zqd\") pod \"c3dc9252-e96e-4db1-a00e-1256b4e23708\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.580093 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-catalog-content\") pod \"c3dc9252-e96e-4db1-a00e-1256b4e23708\" (UID: \"c3dc9252-e96e-4db1-a00e-1256b4e23708\") " Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.581392 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-utilities" (OuterVolumeSpecName: "utilities") pod "c3dc9252-e96e-4db1-a00e-1256b4e23708" (UID: "c3dc9252-e96e-4db1-a00e-1256b4e23708"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.604929 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3dc9252-e96e-4db1-a00e-1256b4e23708-kube-api-access-r5zqd" (OuterVolumeSpecName: "kube-api-access-r5zqd") pod "c3dc9252-e96e-4db1-a00e-1256b4e23708" (UID: "c3dc9252-e96e-4db1-a00e-1256b4e23708"). InnerVolumeSpecName "kube-api-access-r5zqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.684580 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.684662 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zqd\" (UniqueName: \"kubernetes.io/projected/c3dc9252-e96e-4db1-a00e-1256b4e23708-kube-api-access-r5zqd\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.684590 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3dc9252-e96e-4db1-a00e-1256b4e23708" (UID: "c3dc9252-e96e-4db1-a00e-1256b4e23708"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.787315 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3dc9252-e96e-4db1-a00e-1256b4e23708-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.814322 4833 generic.go:334] "Generic (PLEG): container finished" podID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerID="f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302" exitCode=0 Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.814380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7cfj" event={"ID":"c3dc9252-e96e-4db1-a00e-1256b4e23708","Type":"ContainerDied","Data":"f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302"} Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.814413 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7cfj" event={"ID":"c3dc9252-e96e-4db1-a00e-1256b4e23708","Type":"ContainerDied","Data":"80043d772c2d314c3d5de3583871b8d55cba6008da742132db9e128592f62797"} Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.814438 4833 scope.go:117] "RemoveContainer" containerID="f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.814659 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c7cfj" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.848664 4833 scope.go:117] "RemoveContainer" containerID="8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79" Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.861631 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c7cfj"] Oct 13 09:19:04 crc kubenswrapper[4833]: I1013 09:19:04.872770 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c7cfj"] Oct 13 09:19:05 crc kubenswrapper[4833]: I1013 09:19:05.537198 4833 scope.go:117] "RemoveContainer" containerID="a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1" Oct 13 09:19:05 crc kubenswrapper[4833]: I1013 09:19:05.571092 4833 scope.go:117] "RemoveContainer" containerID="f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302" Oct 13 09:19:05 crc kubenswrapper[4833]: E1013 09:19:05.577126 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302\": container with ID starting with f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302 not found: ID does not exist" containerID="f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302" Oct 13 09:19:05 crc kubenswrapper[4833]: I1013 09:19:05.577177 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302"} err="failed to get container status \"f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302\": rpc error: code = NotFound desc = could not find container \"f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302\": container with ID starting with f616ec9c8fa1f93c67504e308f7e149d4e611e50ae3134b786053b00a4d27302 not found: ID does not exist" Oct 13 09:19:05 crc kubenswrapper[4833]: I1013 09:19:05.577215 4833 scope.go:117] "RemoveContainer" containerID="8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79" Oct 13 09:19:05 crc kubenswrapper[4833]: E1013 09:19:05.577759 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79\": container with ID starting with 8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79 not found: ID does not exist" containerID="8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79" Oct 13 09:19:05 crc kubenswrapper[4833]: I1013 09:19:05.577881 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79"} err="failed to get container status \"8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79\": rpc error: code = NotFound desc = could not find container \"8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79\": container with ID starting with 8843128859ecb544497700e9d1eb20c604ac13d5ae9fabbe92e2c4ed5aeb4a79 not found: ID does not exist" Oct 13 09:19:05 crc kubenswrapper[4833]: I1013 09:19:05.577979 4833 scope.go:117] "RemoveContainer" containerID="a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1" Oct 13 09:19:05 crc kubenswrapper[4833]: E1013 09:19:05.581787 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1\": container with ID starting with a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1 not found: ID does not exist" containerID="a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1" Oct 13 09:19:05 crc kubenswrapper[4833]: I1013 09:19:05.582122 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1"} err="failed to get container status \"a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1\": rpc error: code = NotFound desc = could not find container \"a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1\": container with ID starting with a98d08b68f2156c8eb2cae7e9c7d22d60077a657697e267762e45df47bc73df1 not found: ID does not exist" Oct 13 09:19:06 crc kubenswrapper[4833]: I1013 09:19:06.639468 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" path="/var/lib/kubelet/pods/c3dc9252-e96e-4db1-a00e-1256b4e23708/volumes" Oct 13 09:19:20 crc kubenswrapper[4833]: I1013 09:19:20.983369 4833 generic.go:334] "Generic (PLEG): container finished" podID="13006645-b24a-4fd1-9138-ec3fa39c641a" containerID="58ef8db1638f22d1af09b65b52f3128ffe3720041ebc9f09c38a19d3fc4b68fb" exitCode=0 Oct 13 09:19:20 crc kubenswrapper[4833]: I1013 09:19:20.983468 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/crc-debug-dzhhk" event={"ID":"13006645-b24a-4fd1-9138-ec3fa39c641a","Type":"ContainerDied","Data":"58ef8db1638f22d1af09b65b52f3128ffe3720041ebc9f09c38a19d3fc4b68fb"} Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.110557 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-dzhhk" Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.146048 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-dzhhk"] Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.155492 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-dzhhk"] Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.191686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13006645-b24a-4fd1-9138-ec3fa39c641a-host\") pod \"13006645-b24a-4fd1-9138-ec3fa39c641a\" (UID: \"13006645-b24a-4fd1-9138-ec3fa39c641a\") " Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.191942 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqwsv\" (UniqueName: \"kubernetes.io/projected/13006645-b24a-4fd1-9138-ec3fa39c641a-kube-api-access-tqwsv\") pod \"13006645-b24a-4fd1-9138-ec3fa39c641a\" (UID: \"13006645-b24a-4fd1-9138-ec3fa39c641a\") " Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.192104 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13006645-b24a-4fd1-9138-ec3fa39c641a-host" (OuterVolumeSpecName: "host") pod "13006645-b24a-4fd1-9138-ec3fa39c641a" (UID: "13006645-b24a-4fd1-9138-ec3fa39c641a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.192459 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13006645-b24a-4fd1-9138-ec3fa39c641a-host\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.198090 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13006645-b24a-4fd1-9138-ec3fa39c641a-kube-api-access-tqwsv" (OuterVolumeSpecName: "kube-api-access-tqwsv") pod "13006645-b24a-4fd1-9138-ec3fa39c641a" (UID: "13006645-b24a-4fd1-9138-ec3fa39c641a"). InnerVolumeSpecName "kube-api-access-tqwsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.294164 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqwsv\" (UniqueName: \"kubernetes.io/projected/13006645-b24a-4fd1-9138-ec3fa39c641a-kube-api-access-tqwsv\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:22 crc kubenswrapper[4833]: I1013 09:19:22.641174 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13006645-b24a-4fd1-9138-ec3fa39c641a" path="/var/lib/kubelet/pods/13006645-b24a-4fd1-9138-ec3fa39c641a/volumes" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.005478 4833 scope.go:117] "RemoveContainer" containerID="58ef8db1638f22d1af09b65b52f3128ffe3720041ebc9f09c38a19d3fc4b68fb" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.005514 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-dzhhk" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.313884 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-tnh9m"] Oct 13 09:19:23 crc kubenswrapper[4833]: E1013 09:19:23.314270 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerName="extract-content" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.314284 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerName="extract-content" Oct 13 09:19:23 crc kubenswrapper[4833]: E1013 09:19:23.314302 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerName="extract-utilities" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.314307 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerName="extract-utilities" Oct 13 09:19:23 crc kubenswrapper[4833]: E1013 09:19:23.314331 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerName="registry-server" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.314337 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerName="registry-server" Oct 13 09:19:23 crc kubenswrapper[4833]: E1013 09:19:23.314355 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13006645-b24a-4fd1-9138-ec3fa39c641a" containerName="container-00" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.314361 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="13006645-b24a-4fd1-9138-ec3fa39c641a" containerName="container-00" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.314566 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c3dc9252-e96e-4db1-a00e-1256b4e23708" containerName="registry-server" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.314588 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="13006645-b24a-4fd1-9138-ec3fa39c641a" containerName="container-00" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.315243 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.421512 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-host\") pod \"crc-debug-tnh9m\" (UID: \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\") " pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.421670 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b54x\" (UniqueName: \"kubernetes.io/projected/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-kube-api-access-9b54x\") pod \"crc-debug-tnh9m\" (UID: \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\") " pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.523851 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b54x\" (UniqueName: \"kubernetes.io/projected/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-kube-api-access-9b54x\") pod \"crc-debug-tnh9m\" (UID: \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\") " pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.524080 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-host\") pod \"crc-debug-tnh9m\" (UID: \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\") " pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.524210 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-host\") pod \"crc-debug-tnh9m\" (UID: \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\") " pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.540489 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b54x\" (UniqueName: \"kubernetes.io/projected/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-kube-api-access-9b54x\") pod \"crc-debug-tnh9m\" (UID: \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\") " pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:23 crc kubenswrapper[4833]: I1013 09:19:23.640187 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:24 crc kubenswrapper[4833]: I1013 09:19:24.071209 4833 generic.go:334] "Generic (PLEG): container finished" podID="7dd2d6b0-faa3-4a49-b9fe-31f28c80c219" containerID="744bd87c8d44d305b8ffade36cf6cc78766f5f04db0e9341c64c1cf67bdec5d4" exitCode=0 Oct 13 09:19:24 crc kubenswrapper[4833]: I1013 09:19:24.071585 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" event={"ID":"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219","Type":"ContainerDied","Data":"744bd87c8d44d305b8ffade36cf6cc78766f5f04db0e9341c64c1cf67bdec5d4"} Oct 13 09:19:24 crc kubenswrapper[4833]: I1013 09:19:24.071614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" event={"ID":"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219","Type":"ContainerStarted","Data":"919ef01832438916866aa094ea890af392decb388d3fd0cb0d1eda8763a6e2c2"} Oct 13 09:19:24 crc kubenswrapper[4833]: I1013 09:19:24.499195 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-tnh9m"] Oct 13 09:19:24 crc kubenswrapper[4833]: I1013 09:19:24.507971 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-tnh9m"] Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.240799 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.365958 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-host\") pod \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\" (UID: \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\") " Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.366070 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-host" (OuterVolumeSpecName: "host") pod "7dd2d6b0-faa3-4a49-b9fe-31f28c80c219" (UID: "7dd2d6b0-faa3-4a49-b9fe-31f28c80c219"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.366206 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b54x\" (UniqueName: \"kubernetes.io/projected/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-kube-api-access-9b54x\") pod \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\" (UID: \"7dd2d6b0-faa3-4a49-b9fe-31f28c80c219\") " Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.367026 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-host\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.752174 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-xzjds"] Oct 13 09:19:25 crc kubenswrapper[4833]: E1013 09:19:25.752811 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd2d6b0-faa3-4a49-b9fe-31f28c80c219" containerName="container-00" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.752833 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd2d6b0-faa3-4a49-b9fe-31f28c80c219" containerName="container-00" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.753210 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd2d6b0-faa3-4a49-b9fe-31f28c80c219" containerName="container-00" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.754257 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.779816 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-kube-api-access-9b54x" (OuterVolumeSpecName: "kube-api-access-9b54x") pod "7dd2d6b0-faa3-4a49-b9fe-31f28c80c219" (UID: "7dd2d6b0-faa3-4a49-b9fe-31f28c80c219"). InnerVolumeSpecName "kube-api-access-9b54x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.877754 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ft7x\" (UniqueName: \"kubernetes.io/projected/d32c17d0-b34d-443a-a609-e087fdd95ea7-kube-api-access-2ft7x\") pod \"crc-debug-xzjds\" (UID: \"d32c17d0-b34d-443a-a609-e087fdd95ea7\") " pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.878193 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d32c17d0-b34d-443a-a609-e087fdd95ea7-host\") pod \"crc-debug-xzjds\" (UID: \"d32c17d0-b34d-443a-a609-e087fdd95ea7\") " pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.878349 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b54x\" (UniqueName: \"kubernetes.io/projected/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219-kube-api-access-9b54x\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.980180 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d32c17d0-b34d-443a-a609-e087fdd95ea7-host\") pod \"crc-debug-xzjds\" (UID: \"d32c17d0-b34d-443a-a609-e087fdd95ea7\") " pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.980251 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ft7x\" (UniqueName: \"kubernetes.io/projected/d32c17d0-b34d-443a-a609-e087fdd95ea7-kube-api-access-2ft7x\") pod \"crc-debug-xzjds\" (UID: \"d32c17d0-b34d-443a-a609-e087fdd95ea7\") " pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:25 crc kubenswrapper[4833]: I1013 09:19:25.980346 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d32c17d0-b34d-443a-a609-e087fdd95ea7-host\") pod \"crc-debug-xzjds\" (UID: \"d32c17d0-b34d-443a-a609-e087fdd95ea7\") " pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:26 crc kubenswrapper[4833]: I1013 09:19:26.013642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ft7x\" (UniqueName: \"kubernetes.io/projected/d32c17d0-b34d-443a-a609-e087fdd95ea7-kube-api-access-2ft7x\") pod \"crc-debug-xzjds\" (UID: \"d32c17d0-b34d-443a-a609-e087fdd95ea7\") " pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:26 crc kubenswrapper[4833]: I1013 09:19:26.075119 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:26 crc kubenswrapper[4833]: I1013 09:19:26.091525 4833 scope.go:117] "RemoveContainer" containerID="744bd87c8d44d305b8ffade36cf6cc78766f5f04db0e9341c64c1cf67bdec5d4" Oct 13 09:19:26 crc kubenswrapper[4833]: I1013 09:19:26.091633 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-tnh9m" Oct 13 09:19:26 crc kubenswrapper[4833]: I1013 09:19:26.638800 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd2d6b0-faa3-4a49-b9fe-31f28c80c219" path="/var/lib/kubelet/pods/7dd2d6b0-faa3-4a49-b9fe-31f28c80c219/volumes" Oct 13 09:19:27 crc kubenswrapper[4833]: I1013 09:19:27.105102 4833 generic.go:334] "Generic (PLEG): container finished" podID="d32c17d0-b34d-443a-a609-e087fdd95ea7" containerID="3a44da4ff451dc5a03b088f38fa87ec599fa235ee64c717a98a4922862a50862" exitCode=0 Oct 13 09:19:27 crc kubenswrapper[4833]: I1013 09:19:27.105194 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/crc-debug-xzjds" event={"ID":"d32c17d0-b34d-443a-a609-e087fdd95ea7","Type":"ContainerDied","Data":"3a44da4ff451dc5a03b088f38fa87ec599fa235ee64c717a98a4922862a50862"} Oct 13 09:19:27 crc kubenswrapper[4833]: I1013 09:19:27.105453 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/crc-debug-xzjds" event={"ID":"d32c17d0-b34d-443a-a609-e087fdd95ea7","Type":"ContainerStarted","Data":"8868c4acba0f3dffe9936fb51579ad75f285bb13168dd73921881ab887248cd0"} Oct 13 09:19:27 crc kubenswrapper[4833]: I1013 09:19:27.136375 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-xzjds"] Oct 13 09:19:27 crc kubenswrapper[4833]: I1013 09:19:27.144289 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5xxsh/crc-debug-xzjds"] Oct 13 09:19:28 crc kubenswrapper[4833]: I1013 09:19:28.226902 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:28 crc kubenswrapper[4833]: I1013 09:19:28.329458 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d32c17d0-b34d-443a-a609-e087fdd95ea7-host\") pod \"d32c17d0-b34d-443a-a609-e087fdd95ea7\" (UID: \"d32c17d0-b34d-443a-a609-e087fdd95ea7\") " Oct 13 09:19:28 crc kubenswrapper[4833]: I1013 09:19:28.329636 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d32c17d0-b34d-443a-a609-e087fdd95ea7-host" (OuterVolumeSpecName: "host") pod "d32c17d0-b34d-443a-a609-e087fdd95ea7" (UID: "d32c17d0-b34d-443a-a609-e087fdd95ea7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 09:19:28 crc kubenswrapper[4833]: I1013 09:19:28.329866 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ft7x\" (UniqueName: \"kubernetes.io/projected/d32c17d0-b34d-443a-a609-e087fdd95ea7-kube-api-access-2ft7x\") pod \"d32c17d0-b34d-443a-a609-e087fdd95ea7\" (UID: \"d32c17d0-b34d-443a-a609-e087fdd95ea7\") " Oct 13 09:19:28 crc kubenswrapper[4833]: I1013 09:19:28.330460 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d32c17d0-b34d-443a-a609-e087fdd95ea7-host\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:28 crc kubenswrapper[4833]: I1013 09:19:28.336341 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32c17d0-b34d-443a-a609-e087fdd95ea7-kube-api-access-2ft7x" (OuterVolumeSpecName: "kube-api-access-2ft7x") pod "d32c17d0-b34d-443a-a609-e087fdd95ea7" (UID: "d32c17d0-b34d-443a-a609-e087fdd95ea7"). InnerVolumeSpecName "kube-api-access-2ft7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:19:28 crc kubenswrapper[4833]: I1013 09:19:28.432152 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ft7x\" (UniqueName: \"kubernetes.io/projected/d32c17d0-b34d-443a-a609-e087fdd95ea7-kube-api-access-2ft7x\") on node \"crc\" DevicePath \"\"" Oct 13 09:19:28 crc kubenswrapper[4833]: I1013 09:19:28.642903 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d32c17d0-b34d-443a-a609-e087fdd95ea7" path="/var/lib/kubelet/pods/d32c17d0-b34d-443a-a609-e087fdd95ea7/volumes" Oct 13 09:19:29 crc kubenswrapper[4833]: I1013 09:19:29.126548 4833 scope.go:117] "RemoveContainer" containerID="3a44da4ff451dc5a03b088f38fa87ec599fa235ee64c717a98a4922862a50862" Oct 13 09:19:29 crc kubenswrapper[4833]: I1013 09:19:29.126608 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xxsh/crc-debug-xzjds" Oct 13 09:19:30 crc kubenswrapper[4833]: I1013 09:19:30.542677 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:19:30 crc kubenswrapper[4833]: I1013 09:19:30.543047 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.552425 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hdl8p"] Oct 13 09:19:45 crc kubenswrapper[4833]: E1013 09:19:45.553457 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32c17d0-b34d-443a-a609-e087fdd95ea7" containerName="container-00" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.553471 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32c17d0-b34d-443a-a609-e087fdd95ea7" containerName="container-00" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.553809 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d32c17d0-b34d-443a-a609-e087fdd95ea7" containerName="container-00" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.555631 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.594420 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdl8p"] Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.701730 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-utilities\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.701792 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-catalog-content\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.701845 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfs4z\" (UniqueName: \"kubernetes.io/projected/4fa468ee-0ec3-47e0-aeff-0a7841650350-kube-api-access-nfs4z\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.804276 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-utilities\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.804385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-catalog-content\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.804486 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfs4z\" (UniqueName: \"kubernetes.io/projected/4fa468ee-0ec3-47e0-aeff-0a7841650350-kube-api-access-nfs4z\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.805461 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-catalog-content\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.805814 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-utilities\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.844439 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nfs4z\" (UniqueName: \"kubernetes.io/projected/4fa468ee-0ec3-47e0-aeff-0a7841650350-kube-api-access-nfs4z\") pod \"redhat-operators-hdl8p\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:45 crc kubenswrapper[4833]: I1013 09:19:45.884344 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:46 crc kubenswrapper[4833]: I1013 09:19:46.408953 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdl8p"] Oct 13 09:19:47 crc kubenswrapper[4833]: I1013 09:19:47.322982 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdl8p" event={"ID":"4fa468ee-0ec3-47e0-aeff-0a7841650350","Type":"ContainerStarted","Data":"fea838ff058f34038e773ea1fc5516a5d59830603f31b0e3ec7ef9a9fb05721e"} Oct 13 09:19:48 crc kubenswrapper[4833]: I1013 09:19:48.338445 4833 generic.go:334] "Generic (PLEG): container finished" podID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerID="37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb" exitCode=0 Oct 13 09:19:48 crc kubenswrapper[4833]: I1013 09:19:48.338634 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdl8p" event={"ID":"4fa468ee-0ec3-47e0-aeff-0a7841650350","Type":"ContainerDied","Data":"37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb"} Oct 13 09:19:50 crc kubenswrapper[4833]: I1013 09:19:50.376379 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdl8p" event={"ID":"4fa468ee-0ec3-47e0-aeff-0a7841650350","Type":"ContainerStarted","Data":"af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e"} Oct 13 09:19:54 crc kubenswrapper[4833]: I1013 09:19:54.423191 4833 generic.go:334] "Generic (PLEG): container finished" podID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerID="af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e" exitCode=0 Oct 13 09:19:54 crc kubenswrapper[4833]: I1013 09:19:54.423262 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdl8p" event={"ID":"4fa468ee-0ec3-47e0-aeff-0a7841650350","Type":"ContainerDied","Data":"af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e"} Oct 13 09:19:55 crc kubenswrapper[4833]: I1013 09:19:55.438045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdl8p" event={"ID":"4fa468ee-0ec3-47e0-aeff-0a7841650350","Type":"ContainerStarted","Data":"ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c"} Oct 13 09:19:55 crc kubenswrapper[4833]: I1013 09:19:55.461975 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hdl8p" podStartSLOduration=3.798807195 podStartE2EDuration="10.461957064s" podCreationTimestamp="2025-10-13 09:19:45 +0000 UTC" firstStartedPulling="2025-10-13 09:19:48.344099214 +0000 UTC m=+10278.444522130" lastFinishedPulling="2025-10-13 09:19:55.007249083 +0000 UTC m=+10285.107671999" observedRunningTime="2025-10-13 09:19:55.454051419 +0000 UTC m=+10285.554474335" watchObservedRunningTime="2025-10-13 09:19:55.461957064 +0000 UTC m=+10285.562379970" Oct 13 09:19:55 crc kubenswrapper[4833]: I1013 09:19:55.884746 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:55 crc kubenswrapper[4833]: I1013 09:19:55.884795 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:19:56 crc kubenswrapper[4833]: I1013 09:19:56.936917 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hdl8p" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="registry-server" probeResult="failure" output=< Oct 13 09:19:56 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Oct 13 09:19:56 crc kubenswrapper[4833]: > Oct 13 09:19:58 crc kubenswrapper[4833]: I1013 09:19:58.498399 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_19fdc94b-9174-42a2-ad1f-7f52f79daa6b/init-config-reloader/0.log" Oct 13 09:19:58 crc kubenswrapper[4833]: I1013 09:19:58.824309 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_19fdc94b-9174-42a2-ad1f-7f52f79daa6b/alertmanager/0.log" Oct 13 09:19:58 crc kubenswrapper[4833]: I1013 09:19:58.869653 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_19fdc94b-9174-42a2-ad1f-7f52f79daa6b/init-config-reloader/0.log" Oct 13 09:19:58 crc kubenswrapper[4833]: I1013 09:19:58.912441 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_19fdc94b-9174-42a2-ad1f-7f52f79daa6b/config-reloader/0.log" Oct 13 09:19:59 crc kubenswrapper[4833]: I1013 09:19:59.117585 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d1a01396-e0aa-4626-9eaf-7a75da4ca8c4/aodh-api/0.log" Oct 13 09:19:59 crc kubenswrapper[4833]: I1013 09:19:59.160464 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d1a01396-e0aa-4626-9eaf-7a75da4ca8c4/aodh-evaluator/0.log" Oct 13 09:19:59 crc kubenswrapper[4833]: I1013 09:19:59.362729 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d1a01396-e0aa-4626-9eaf-7a75da4ca8c4/aodh-listener/0.log" Oct 13 09:19:59 crc kubenswrapper[4833]: I1013 09:19:59.403352 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d1a01396-e0aa-4626-9eaf-7a75da4ca8c4/aodh-notifier/0.log" Oct 13 09:19:59 crc kubenswrapper[4833]: I1013 09:19:59.568624 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b489798bd-m2wph_d855a17c-efd2-41dc-939d-264069e488e7/barbican-api/0.log" Oct 13 09:19:59 crc kubenswrapper[4833]: I1013 09:19:59.623952 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b489798bd-m2wph_d855a17c-efd2-41dc-939d-264069e488e7/barbican-api-log/0.log" Oct 13 09:19:59 crc kubenswrapper[4833]: I1013 09:19:59.801019 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f96d4c4b6-c4vtj_e418cfbe-e180-41a6-9730-91552572bfce/barbican-keystone-listener/0.log" Oct 13 09:20:00 crc kubenswrapper[4833]: I1013 09:20:00.004635 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f96d4c4b6-c4vtj_e418cfbe-e180-41a6-9730-91552572bfce/barbican-keystone-listener-log/0.log" Oct 13 09:20:00 crc kubenswrapper[4833]: I1013 09:20:00.346008 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-77b9cc5f97-fmd25_9fd61fda-332f-4333-adc9-e1815b3a1433/barbican-worker/0.log" Oct 13 09:20:00 crc kubenswrapper[4833]: I1013 09:20:00.394957 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77b9cc5f97-fmd25_9fd61fda-332f-4333-adc9-e1815b3a1433/barbican-worker-log/0.log" Oct 13 09:20:00 crc kubenswrapper[4833]: I1013 09:20:00.542607 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:20:00 crc kubenswrapper[4833]: I1013 09:20:00.542680 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:20:00 crc kubenswrapper[4833]: I1013 09:20:00.664360 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-rlqwd_9a5185c3-99ea-4511-a0ec-f614d10e420f/bootstrap-openstack-openstack-cell1/0.log" Oct 13 09:20:00 crc kubenswrapper[4833]: I1013 09:20:00.876268 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d/ceilometer-central-agent/0.log" Oct 13 09:20:00 crc kubenswrapper[4833]: I1013 09:20:00.934742 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d/ceilometer-notification-agent/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.031355 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d/proxy-httpd/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.088185 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_746c02f3-3c1e-4022-89b0-9e4c1c8f2e5d/sg-core/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.273316 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8dd89614-5887-4774-bbd2-4b8a41630d51/cinder-api/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.343194 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8dd89614-5887-4774-bbd2-4b8a41630d51/cinder-api-log/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.491072 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c6a2abf4-9a00-428a-8d10-8212929d2dd4/cinder-scheduler/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.543782 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c6a2abf4-9a00-428a-8d10-8212929d2dd4/probe/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.748708 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-kpjvj_c42a7993-128b-4196-b9b0-0622b7ecfca4/configure-network-openstack-openstack-cell1/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.765276 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-f96v8_a805910b-b612-45f1-9b8e-98c498855a3d/configure-os-openstack-openstack-cell1/0.log" Oct 13 09:20:01 crc kubenswrapper[4833]: I1013 09:20:01.927923 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-687555fd5c-7h7rk_2ff252af-a98c-42f9-b3e5-9a18d5fa2d10/init/0.log" Oct 13 09:20:02 crc kubenswrapper[4833]: I1013 09:20:02.105225 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-687555fd5c-7h7rk_2ff252af-a98c-42f9-b3e5-9a18d5fa2d10/init/0.log" Oct 13 09:20:02 crc kubenswrapper[4833]: I1013 09:20:02.130243 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-687555fd5c-7h7rk_2ff252af-a98c-42f9-b3e5-9a18d5fa2d10/dnsmasq-dns/0.log" Oct 13 09:20:02 crc kubenswrapper[4833]: I1013 09:20:02.152050 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-vx4f2_4cd90ca5-fdd6-4c4b-8f4c-6c156a814975/download-cache-openstack-openstack-cell1/0.log" Oct 13 09:20:02 crc kubenswrapper[4833]: I1013 09:20:02.330896 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_aadbd2b5-d3a3-4eca-857b-efca637a54ae/glance-httpd/0.log" Oct 13 09:20:02 crc kubenswrapper[4833]: I1013 09:20:02.342514 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_aadbd2b5-d3a3-4eca-857b-efca637a54ae/glance-log/0.log" Oct 13 09:20:02 crc kubenswrapper[4833]: I1013 09:20:02.521612 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cbf8b795-b01a-48f8-8470-0aea6a0c2556/glance-log/0.log" Oct 13 09:20:02 crc kubenswrapper[4833]: I1013 09:20:02.597115 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cbf8b795-b01a-48f8-8470-0aea6a0c2556/glance-httpd/0.log" Oct 13 09:20:03 crc kubenswrapper[4833]: I1013 09:20:03.077766 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6664854fc-pnxtp_a70ba4a7-5e72-4533-a9e4-7181e816b057/heat-engine/0.log" Oct 13 09:20:03 crc kubenswrapper[4833]: I1013 09:20:03.420928 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b4c45f97d-8pghc_1197b9de-5ae7-42f1-b0f6-7c1adcc009e8/horizon/0.log" Oct 13 09:20:03 crc kubenswrapper[4833]: I1013 09:20:03.422790 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-cc5d665d5-b4fc4_e298300a-64be-4c39-8c6d-6b40af6fdf2c/heat-api/0.log" Oct 13 09:20:03 crc kubenswrapper[4833]: I1013 09:20:03.424709 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-68779bfcb7-pjbs4_0943c9ec-c442-4180-aad0-6a1919690b86/heat-cfnapi/0.log" Oct 13 09:20:03 crc kubenswrapper[4833]: I1013 09:20:03.674650 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-d79ll_26d8c1a5-4eda-4010-8bd8-5634ce08c7fb/install-certs-openstack-openstack-cell1/0.log" Oct 13 09:20:03 crc kubenswrapper[4833]: I1013 09:20:03.914652 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-sq85p_323322d4-ab6d-4f37-aea1-5c3497ce1522/install-os-openstack-openstack-cell1/0.log" Oct 13 09:20:03 crc kubenswrapper[4833]: I1013 09:20:03.923382 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-b4c45f97d-8pghc_1197b9de-5ae7-42f1-b0f6-7c1adcc009e8/horizon-log/0.log" Oct 13 09:20:04 crc kubenswrapper[4833]: I1013 09:20:04.111167 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c88f78f44-h69f7_9713b03c-2a06-4163-9540-d7fd9f32c2ab/keystone-api/0.log" Oct 13 09:20:04 crc kubenswrapper[4833]: I1013 09:20:04.122208 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29339041-tp27n_c9696285-91dc-48ff-911b-0f984c7c17f4/keystone-cron/0.log" Oct 13 09:20:04 crc kubenswrapper[4833]: I1013 09:20:04.301243 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29339101-7ntm4_8809de13-fae0-46e4-bfd4-fdbe1e3ba5e4/keystone-cron/0.log" Oct 13 09:20:04 crc kubenswrapper[4833]: I1013 09:20:04.433214 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_03e397ac-e05a-4f00-9ce6-91a68ac1d22f/kube-state-metrics/0.log" Oct 13 09:20:04 crc kubenswrapper[4833]: I1013 09:20:04.547419 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-dkj5f_d3e31455-5a84-415b-be9a-91e2f7033095/libvirt-openstack-openstack-cell1/0.log" Oct 13 09:20:04 crc kubenswrapper[4833]: I1013 09:20:04.884768 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-ddc9cc9f7-vclj7_c9e9ff9f-9222-4649-bb70-e6112a50dfe9/neutron-api/0.log" Oct 13 09:20:04 crc kubenswrapper[4833]: I1013 09:20:04.898486 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-ddc9cc9f7-vclj7_c9e9ff9f-9222-4649-bb70-e6112a50dfe9/neutron-httpd/0.log" Oct 13 09:20:05 crc kubenswrapper[4833]: I1013 09:20:05.259347 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-mvqtv_3327b23b-96ed-4d8f-b3c5-7ebc83c0e07c/neutron-dhcp-openstack-openstack-cell1/0.log" Oct 13 09:20:05 crc kubenswrapper[4833]: I1013 09:20:05.439584 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-kcvtf_7909d966-8e5b-4c89-a6d1-1c6c5f1b5c2f/neutron-metadata-openstack-openstack-cell1/0.log" Oct 13 09:20:05 crc kubenswrapper[4833]: I1013 09:20:05.561934 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-dzjpn_d9df95b1-d108-4695-b002-f586578d6afe/neutron-sriov-openstack-openstack-cell1/0.log" Oct 13 09:20:05 crc kubenswrapper[4833]: I1013 09:20:05.888320 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a2b4a338-75d6-4013-87be-5a57fb8f203e/nova-api-log/0.log" Oct 13 09:20:05 crc kubenswrapper[4833]: I1013 09:20:05.903022 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a2b4a338-75d6-4013-87be-5a57fb8f203e/nova-api-api/0.log" Oct 13 09:20:05 crc kubenswrapper[4833]: I1013 09:20:05.954353 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:20:06 crc kubenswrapper[4833]: I1013 09:20:06.014076 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:20:06 crc kubenswrapper[4833]: I1013 09:20:06.200414 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdl8p"] Oct 13 09:20:06 crc kubenswrapper[4833]: I1013 09:20:06.212626 4833 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d236a3ad-6af8-4c48-97b7-9bc1c9d90039/nova-cell0-conductor-conductor/0.log" Oct 13 09:20:06 crc kubenswrapper[4833]: I1013 09:20:06.323153 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f9864c95-1312-4722-a9db-1848bf00059a/nova-cell1-conductor-conductor/0.log" Oct 13 09:20:06 crc kubenswrapper[4833]: I1013 09:20:06.536021 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a37e4fb8-49e9-4edd-aea8-a60a7b3e1db7/nova-cell1-novncproxy-novncproxy/0.log" Oct 13 09:20:06 crc kubenswrapper[4833]: I1013 09:20:06.873399 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell8brnd_3f103fdd-b425-420e-99f5-e73aaa0b91cc/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Oct 13 09:20:07 crc kubenswrapper[4833]: I1013 09:20:07.036765 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-txjbp_2e4c0a0f-f063-49a3-8289-6efdb12b97fc/nova-cell1-openstack-openstack-cell1/0.log" Oct 13 09:20:07 crc kubenswrapper[4833]: I1013 09:20:07.219279 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aad2dab2-c620-43aa-b43b-be9119ff2864/nova-metadata-log/0.log" Oct 13 09:20:07 crc kubenswrapper[4833]: I1013 09:20:07.582615 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hdl8p" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="registry-server" containerID="cri-o://ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c" gracePeriod=2 Oct 13 09:20:07 crc kubenswrapper[4833]: I1013 09:20:07.692206 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ff520e60-61f4-4a72-bff7-f0a47fc3c5f1/nova-scheduler-scheduler/0.log" Oct 13 09:20:07 crc kubenswrapper[4833]: I1013 09:20:07.940514 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aad2dab2-c620-43aa-b43b-be9119ff2864/nova-metadata-metadata/0.log" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.188710 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5b9f5bccc5-4lqcj_9e9d268a-8671-4459-96b3-abb75af5726a/init/0.log" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.424817 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5b9f5bccc5-4lqcj_9e9d268a-8671-4459-96b3-abb75af5726a/init/0.log" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.506838 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.589725 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5b9f5bccc5-4lqcj_9e9d268a-8671-4459-96b3-abb75af5726a/octavia-api-provider-agent/0.log" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.594413 4833 generic.go:334] "Generic (PLEG): container finished" podID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerID="ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c" exitCode=0 Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.594457 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdl8p" event={"ID":"4fa468ee-0ec3-47e0-aeff-0a7841650350","Type":"ContainerDied","Data":"ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c"} Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.594486 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdl8p" event={"ID":"4fa468ee-0ec3-47e0-aeff-0a7841650350","Type":"ContainerDied","Data":"fea838ff058f34038e773ea1fc5516a5d59830603f31b0e3ec7ef9a9fb05721e"} Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.594503 4833 scope.go:117] "RemoveContainer" containerID="ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.594654 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdl8p" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.618419 4833 scope.go:117] "RemoveContainer" containerID="af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.639280 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-catalog-content\") pod \"4fa468ee-0ec3-47e0-aeff-0a7841650350\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.639398 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfs4z\" (UniqueName: \"kubernetes.io/projected/4fa468ee-0ec3-47e0-aeff-0a7841650350-kube-api-access-nfs4z\") pod \"4fa468ee-0ec3-47e0-aeff-0a7841650350\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.639683 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-utilities\") pod \"4fa468ee-0ec3-47e0-aeff-0a7841650350\" (UID: \"4fa468ee-0ec3-47e0-aeff-0a7841650350\") " Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.642510 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-utilities" (OuterVolumeSpecName: "utilities") pod "4fa468ee-0ec3-47e0-aeff-0a7841650350" (UID: "4fa468ee-0ec3-47e0-aeff-0a7841650350"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.642859 4833 scope.go:117] "RemoveContainer" containerID="37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.647871 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa468ee-0ec3-47e0-aeff-0a7841650350-kube-api-access-nfs4z" (OuterVolumeSpecName: "kube-api-access-nfs4z") pod "4fa468ee-0ec3-47e0-aeff-0a7841650350" (UID: "4fa468ee-0ec3-47e0-aeff-0a7841650350"). InnerVolumeSpecName "kube-api-access-nfs4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.742115 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfs4z\" (UniqueName: \"kubernetes.io/projected/4fa468ee-0ec3-47e0-aeff-0a7841650350-kube-api-access-nfs4z\") on node \"crc\" DevicePath \"\"" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.742144 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.746237 4833 scope.go:117] "RemoveContainer" containerID="ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c" Oct 13 09:20:08 crc kubenswrapper[4833]: E1013 09:20:08.746631 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c\": container with ID starting with ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c not found: ID does not exist" containerID="ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.746658 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c"} err="failed to get container status \"ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c\": rpc error: code = NotFound desc = could not find container \"ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c\": container with ID starting with ff4020251530b3ade02ba4d1c8f16ae8913c68634af239bd762fc5a876a4169c not found: ID does not exist" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.746679 4833 scope.go:117] "RemoveContainer" containerID="af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e" Oct 13 09:20:08 crc kubenswrapper[4833]: E1013 09:20:08.748151 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e\": container with ID starting with af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e not found: ID does not exist" containerID="af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.748177 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e"} err="failed to get container status \"af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e\": rpc error: code = NotFound desc = could not find container 
\"af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e\": container with ID starting with af89fcce29d685e10f13d379919fcad5514c82ab9b24cbdb635f5beb48bc0b8e not found: ID does not exist" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.748191 4833 scope.go:117] "RemoveContainer" containerID="37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb" Oct 13 09:20:08 crc kubenswrapper[4833]: E1013 09:20:08.748435 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb\": container with ID starting with 37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb not found: ID does not exist" containerID="37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.748466 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb"} err="failed to get container status \"37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb\": rpc error: code = NotFound desc = could not find container \"37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb\": container with ID starting with 37b9a4736b73933a64719c5c0f2881bce7ea46bc54425b5fc32653f54525f6bb not found: ID does not exist" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.756028 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fa468ee-0ec3-47e0-aeff-0a7841650350" (UID: "4fa468ee-0ec3-47e0-aeff-0a7841650350"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.763959 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5b9f5bccc5-4lqcj_9e9d268a-8671-4459-96b3-abb75af5726a/octavia-api/0.log" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.793101 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-njgq2_3a7654ab-fe63-4757-8e67-4cdf67232494/init/0.log" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.843751 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa468ee-0ec3-47e0-aeff-0a7841650350-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.938584 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdl8p"] Oct 13 09:20:08 crc kubenswrapper[4833]: I1013 09:20:08.951942 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hdl8p"] Oct 13 09:20:09 crc kubenswrapper[4833]: I1013 09:20:09.012319 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-njgq2_3a7654ab-fe63-4757-8e67-4cdf67232494/init/0.log" Oct 13 09:20:09 crc kubenswrapper[4833]: I1013 09:20:09.111387 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-njgq2_3a7654ab-fe63-4757-8e67-4cdf67232494/octavia-healthmanager/0.log" Oct 13 09:20:09 crc kubenswrapper[4833]: I1013 09:20:09.268158 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-6w4xz_120a8304-64d3-4f08-b340-8f0853335cba/init/0.log" Oct 13 09:20:09 crc kubenswrapper[4833]: I1013 09:20:09.987298 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-6w4xz_120a8304-64d3-4f08-b340-8f0853335cba/octavia-housekeeping/0.log" Oct 13 09:20:10 crc kubenswrapper[4833]: I1013 09:20:10.038015 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-6w4xz_120a8304-64d3-4f08-b340-8f0853335cba/init/0.log" Oct 13 09:20:10 crc kubenswrapper[4833]: I1013 09:20:10.241819 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-678599687f-sc8lv_ca0eb080-87bc-42d1-8250-15bad5d138cd/init/0.log" Oct 13 09:20:10 crc kubenswrapper[4833]: I1013 09:20:10.382728 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-678599687f-sc8lv_ca0eb080-87bc-42d1-8250-15bad5d138cd/init/0.log" Oct 13 09:20:10 crc kubenswrapper[4833]: I1013 09:20:10.403291 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-678599687f-sc8lv_ca0eb080-87bc-42d1-8250-15bad5d138cd/octavia-amphora-httpd/0.log" Oct 13 09:20:10 crc kubenswrapper[4833]: I1013 09:20:10.569523 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-rmj9t_4c8629c8-3ccd-4d03-be53-6923783cf739/init/0.log" Oct 13 09:20:10 crc kubenswrapper[4833]: I1013 09:20:10.655598 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" path="/var/lib/kubelet/pods/4fa468ee-0ec3-47e0-aeff-0a7841650350/volumes" Oct 13 09:20:10 crc kubenswrapper[4833]: I1013 09:20:10.837696 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-rsyslog-rmj9t_4c8629c8-3ccd-4d03-be53-6923783cf739/octavia-rsyslog/0.log" Oct 13 09:20:10 crc kubenswrapper[4833]: I1013 09:20:10.838139 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-rmj9t_4c8629c8-3ccd-4d03-be53-6923783cf739/init/0.log" Oct 13 09:20:11 crc kubenswrapper[4833]: I1013 09:20:11.014620 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-w4qcr_87bfdfbf-fa98-4597-bbc7-bb9add7b65db/init/0.log" Oct 13 09:20:11 crc kubenswrapper[4833]: I1013 09:20:11.277962 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-w4qcr_87bfdfbf-fa98-4597-bbc7-bb9add7b65db/init/0.log" Oct 13 09:20:11 crc kubenswrapper[4833]: I1013 09:20:11.343740 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-w4qcr_87bfdfbf-fa98-4597-bbc7-bb9add7b65db/octavia-worker/0.log" Oct 13 09:20:11 crc kubenswrapper[4833]: I1013 09:20:11.512698 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8e193d26-9513-4f0d-bed6-e499f9264ba6/mysql-bootstrap/0.log" Oct 13 09:20:11 crc kubenswrapper[4833]: I1013 09:20:11.685963 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8e193d26-9513-4f0d-bed6-e499f9264ba6/galera/0.log" Oct 13 09:20:11 crc kubenswrapper[4833]: I1013 09:20:11.749966 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8e193d26-9513-4f0d-bed6-e499f9264ba6/mysql-bootstrap/0.log" Oct 13 09:20:11 crc kubenswrapper[4833]: I1013 09:20:11.920172 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f5728b9b-3142-4f3c-af80-b23b846b22e0/mysql-bootstrap/0.log" Oct 13 09:20:12 crc kubenswrapper[4833]: I1013 09:20:12.081884 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f5728b9b-3142-4f3c-af80-b23b846b22e0/mysql-bootstrap/0.log" Oct 13 09:20:12 crc kubenswrapper[4833]: I1013 09:20:12.174558 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f5728b9b-3142-4f3c-af80-b23b846b22e0/galera/0.log" Oct 13 09:20:12 crc kubenswrapper[4833]: I1013 09:20:12.320980 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_85c42c8c-b34b-4d78-b2c7-e4bbd86f0a09/openstackclient/0.log" Oct 13 09:20:12 crc kubenswrapper[4833]: I1013 09:20:12.466733 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hptlp_7189a2c9-b33a-4701-a8af-43ad816b793e/openstack-network-exporter/0.log" Oct 13 09:20:12 crc kubenswrapper[4833]: I1013 09:20:12.670586 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c74mk_6aaf4fde-4669-4220-97e2-f04d63727284/ovsdb-server-init/0.log" Oct 13 09:20:12 crc kubenswrapper[4833]: I1013 09:20:12.886778 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c74mk_6aaf4fde-4669-4220-97e2-f04d63727284/ovsdb-server/0.log" Oct 13 09:20:12 crc kubenswrapper[4833]: I1013 09:20:12.887881 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c74mk_6aaf4fde-4669-4220-97e2-f04d63727284/ovsdb-server-init/0.log" Oct 13 09:20:12 crc kubenswrapper[4833]: I1013 09:20:12.911095 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-c74mk_6aaf4fde-4669-4220-97e2-f04d63727284/ovs-vswitchd/0.log" Oct 13 09:20:13 crc kubenswrapper[4833]: I1013 09:20:13.126877 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rd2wk_61d0e633-8305-4d08-b83a-af05a6abbb96/ovn-controller/0.log" Oct 13 09:20:13 crc kubenswrapper[4833]: I1013 09:20:13.301217 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8a6c7ab6-9cb4-4e29-b0cc-08939c57944d/openstack-network-exporter/0.log" Oct 13 09:20:13 crc kubenswrapper[4833]: I1013 09:20:13.355116 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8a6c7ab6-9cb4-4e29-b0cc-08939c57944d/ovn-northd/0.log" Oct 13 09:20:13 crc kubenswrapper[4833]: I1013 09:20:13.633291 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-c8thl_21601e8f-e29f-4fc3-a938-4fc556422961/ovn-openstack-openstack-cell1/0.log" Oct 13 09:20:13 crc kubenswrapper[4833]: I1013 09:20:13.704776 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_68045148-ee4a-4502-932b-d945d4cc26f3/openstack-network-exporter/0.log" Oct 13 09:20:13 crc kubenswrapper[4833]: I1013 09:20:13.842609 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_68045148-ee4a-4502-932b-d945d4cc26f3/ovsdbserver-nb/0.log" Oct 13 09:20:13 crc kubenswrapper[4833]: I1013 09:20:13.896575 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_38a8efaf-eda4-4b3a-9d9c-7d1f513b8345/openstack-network-exporter/0.log" Oct 13 09:20:14 crc kubenswrapper[4833]: I1013 09:20:14.085979 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_38a8efaf-eda4-4b3a-9d9c-7d1f513b8345/ovsdbserver-nb/0.log" Oct 13 09:20:14 crc kubenswrapper[4833]: I1013 09:20:14.217362 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_98c76e26-6ad7-49f3-b68c-1a1a857111dd/openstack-network-exporter/0.log" Oct 13 09:20:14 crc kubenswrapper[4833]: I1013 09:20:14.300485 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_98c76e26-6ad7-49f3-b68c-1a1a857111dd/ovsdbserver-nb/0.log" Oct 13 09:20:14 crc kubenswrapper[4833]: I1013 09:20:14.430878 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0480246d-bbb2-4278-850e-1835e52c7eae/openstack-network-exporter/0.log" Oct 13 09:20:14 crc kubenswrapper[4833]: I1013 09:20:14.547929 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0480246d-bbb2-4278-850e-1835e52c7eae/ovsdbserver-sb/0.log" Oct 13 09:20:14 crc kubenswrapper[4833]: I1013 09:20:14.743407 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_9413068d-6f1c-449a-bbbd-7e1ad94bf92c/openstack-network-exporter/0.log" Oct 13 09:20:14 crc kubenswrapper[4833]: I1013 09:20:14.802200 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_9413068d-6f1c-449a-bbbd-7e1ad94bf92c/ovsdbserver-sb/0.log" Oct 13 09:20:14 crc kubenswrapper[4833]: I1013 09:20:14.972301 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_1bf99176-41b5-4884-ab0e-4a2f877e26ec/openstack-network-exporter/0.log" Oct 13 09:20:15 crc kubenswrapper[4833]: I1013 09:20:15.022083 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-2_1bf99176-41b5-4884-ab0e-4a2f877e26ec/ovsdbserver-sb/0.log" Oct 13 09:20:15 crc kubenswrapper[4833]: I1013 09:20:15.227724 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-864fb88886-jf42k_009992ed-3b8d-457e-a32c-3119e80a90a7/placement-api/0.log" Oct 13 09:20:15 crc kubenswrapper[4833]: I1013 09:20:15.332809 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-864fb88886-jf42k_009992ed-3b8d-457e-a32c-3119e80a90a7/placement-log/0.log" Oct 13 09:20:15 crc kubenswrapper[4833]: I1013 09:20:15.555437 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cspj8v_e69c4e90-686a-4ca5-a9df-a661d4bb00ee/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Oct 13 09:20:15 crc kubenswrapper[4833]: I1013 09:20:15.706344 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6389489a-7b63-44c5-aa24-8ff7f36399c9/init-config-reloader/0.log" Oct 13 09:20:15 crc kubenswrapper[4833]: I1013 09:20:15.945966 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6389489a-7b63-44c5-aa24-8ff7f36399c9/init-config-reloader/0.log" Oct 13 09:20:15 crc kubenswrapper[4833]: I1013 09:20:15.946306 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6389489a-7b63-44c5-aa24-8ff7f36399c9/config-reloader/0.log" Oct 13 09:20:15 crc kubenswrapper[4833]: I1013 09:20:15.956808 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6389489a-7b63-44c5-aa24-8ff7f36399c9/prometheus/0.log" Oct 13 09:20:16 crc kubenswrapper[4833]: I1013 09:20:16.189400 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6389489a-7b63-44c5-aa24-8ff7f36399c9/thanos-sidecar/0.log" Oct 13 09:20:16 crc kubenswrapper[4833]: I1013 09:20:16.213684 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9/setup-container/0.log" Oct 13 09:20:16 crc kubenswrapper[4833]: I1013 09:20:16.389518 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9/setup-container/0.log" Oct 13 09:20:16 crc kubenswrapper[4833]: I1013 09:20:16.456712 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa77cbfd-35bb-40f8-9ba9-62dc99e07ad9/rabbitmq/0.log" Oct 13 09:20:16 crc kubenswrapper[4833]: I1013 09:20:16.604550 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_face6c99-0326-44b0-a9eb-c877d804ca2f/setup-container/0.log" Oct 13 09:20:16 crc kubenswrapper[4833]: I1013 09:20:16.825143 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_face6c99-0326-44b0-a9eb-c877d804ca2f/setup-container/0.log" Oct 13 09:20:16 crc kubenswrapper[4833]: I1013 09:20:16.920999 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_face6c99-0326-44b0-a9eb-c877d804ca2f/rabbitmq/0.log" Oct 13 09:20:17 crc kubenswrapper[4833]: I1013 09:20:17.028965 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-nk465_1e0af3e4-7b62-491a-9bce-c6f749f15512/reboot-os-openstack-openstack-cell1/0.log" Oct 13 
09:20:17 crc kubenswrapper[4833]: I1013 09:20:17.126619 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-f2xkf_a29203c6-0802-488f-9269-b4725a8923b2/run-os-openstack-openstack-cell1/0.log" Oct 13 09:20:17 crc kubenswrapper[4833]: I1013 09:20:17.335901 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-w8brv_d5672816-d3a2-49fb-bbeb-eb03f3d1639d/ssh-known-hosts-openstack/0.log" Oct 13 09:20:17 crc kubenswrapper[4833]: I1013 09:20:17.587946 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6bcbb6d8f6-j7v6l_8b84ec2a-5911-404b-a6e4-6654625c0e0f/proxy-server/0.log" Oct 13 09:20:17 crc kubenswrapper[4833]: I1013 09:20:17.776193 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6bcbb6d8f6-j7v6l_8b84ec2a-5911-404b-a6e4-6654625c0e0f/proxy-httpd/0.log" Oct 13 09:20:17 crc kubenswrapper[4833]: I1013 09:20:17.800820 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dcsbj_26e000c4-06bf-4225-b73d-6738529f741a/swift-ring-rebalance/0.log" Oct 13 09:20:18 crc kubenswrapper[4833]: I1013 09:20:18.751758 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-nwksz_58c23bcc-cad3-4d75-9970-b8b9335d7fe5/telemetry-openstack-openstack-cell1/0.log" Oct 13 09:20:18 crc kubenswrapper[4833]: I1013 09:20:18.977803 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-2968n_8e4d3fe9-fde6-4388-892d-6477fa1aa0c4/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Oct 13 09:20:18 crc kubenswrapper[4833]: I1013 09:20:18.994618 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-ldvs7_eb27761f-f85a-4cb5-a221-b3d8eaf993c8/validate-network-openstack-openstack-cell1/0.log" Oct 13 09:20:20 crc kubenswrapper[4833]: I1013 09:20:20.266826 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_53377dda-bb5d-4ac3-bf05-d6d7a8801896/memcached/0.log" Oct 13 09:20:30 crc kubenswrapper[4833]: I1013 09:20:30.542325 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:20:30 crc kubenswrapper[4833]: I1013 09:20:30.542801 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:20:30 crc kubenswrapper[4833]: I1013 09:20:30.542842 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 09:20:30 crc kubenswrapper[4833]: I1013 09:20:30.543830 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c711e429a871ab17078d49d8686220613b7c9459f066d2cfadebd3ed9d5979ed"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Oct 13 09:20:30 crc kubenswrapper[4833]: I1013 09:20:30.543905 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://c711e429a871ab17078d49d8686220613b7c9459f066d2cfadebd3ed9d5979ed" gracePeriod=600 Oct 13 09:20:31 crc kubenswrapper[4833]: I1013 09:20:31.829628 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="c711e429a871ab17078d49d8686220613b7c9459f066d2cfadebd3ed9d5979ed" exitCode=0 Oct 13 09:20:31 crc kubenswrapper[4833]: I1013 09:20:31.829692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"c711e429a871ab17078d49d8686220613b7c9459f066d2cfadebd3ed9d5979ed"} Oct 13 09:20:31 crc kubenswrapper[4833]: I1013 09:20:31.830313 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerStarted","Data":"9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151"} Oct 13 09:20:31 crc kubenswrapper[4833]: I1013 09:20:31.830331 4833 scope.go:117] "RemoveContainer" containerID="c1e0a749edc59c61aa1279e53e1d1abcd013fc13ab1f20b8ccb195b1b5ce6138" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.297326 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-658bdf4b74-w9zrf_bbfa1bde-53ad-46fb-9217-cfd3bcbd9355/kube-rbac-proxy/0.log" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.431110 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-658bdf4b74-w9zrf_bbfa1bde-53ad-46fb-9217-cfd3bcbd9355/manager/0.log" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.499860 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g_1c6b623c-4ccf-4b07-8448-c9b8b403615c/util/0.log" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.671608 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g_1c6b623c-4ccf-4b07-8448-c9b8b403615c/pull/0.log" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.671587 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g_1c6b623c-4ccf-4b07-8448-c9b8b403615c/util/0.log" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.689338 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g_1c6b623c-4ccf-4b07-8448-c9b8b403615c/pull/0.log" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.832990 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g_1c6b623c-4ccf-4b07-8448-c9b8b403615c/util/0.log" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.876068 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g_1c6b623c-4ccf-4b07-8448-c9b8b403615c/extract/0.log" Oct 13 09:21:25 crc kubenswrapper[4833]: I1013 09:21:25.883072 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bghh8g_1c6b623c-4ccf-4b07-8448-c9b8b403615c/pull/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.022755 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7b7fb68549-mhd5p_9608166e-0d48-4e57-99d7-6fa85036e7bf/kube-rbac-proxy/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.099149 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-85d5d9dd78-hmcxj_a24ab3f0-f53c-4f16-9fd8-e0e69149776d/kube-rbac-proxy/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.155588 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7b7fb68549-mhd5p_9608166e-0d48-4e57-99d7-6fa85036e7bf/manager/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.209585 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-85d5d9dd78-hmcxj_a24ab3f0-f53c-4f16-9fd8-e0e69149776d/manager/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.309891 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84b9b84486-5pqx6_3da4ee47-a023-411b-b367-c7eae5c8bd9b/kube-rbac-proxy/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.443303 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84b9b84486-5pqx6_3da4ee47-a023-411b-b367-c7eae5c8bd9b/manager/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.518161 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-858f76bbdd-wl4kq_677b9ffe-683c-4a84-9b7c-a625280c79f8/kube-rbac-proxy/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.596325 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7ffbcb7588-4nz57_63700b89-455e-4df1-baec-273d82261c60/kube-rbac-proxy/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.613389 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-858f76bbdd-wl4kq_677b9ffe-683c-4a84-9b7c-a625280c79f8/manager/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.756237 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7ffbcb7588-4nz57_63700b89-455e-4df1-baec-273d82261c60/manager/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.823051 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-656bcbd775-6wmgz_47ff6370-fc12-4d28-a59a-1ae1614191a9/kube-rbac-proxy/0.log" Oct 13 09:21:26 crc kubenswrapper[4833]: I1013 09:21:26.978561 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9c5c78d49-9rvcx_8ea24fc5-58ac-426e-8943-eccec9261185/kube-rbac-proxy/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.024669 4833 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9c5c78d49-9rvcx_8ea24fc5-58ac-426e-8943-eccec9261185/manager/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.189585 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-656bcbd775-6wmgz_47ff6370-fc12-4d28-a59a-1ae1614191a9/manager/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.504261 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55b6b7c7b8-sqftk_bc224de1-26e7-447e-a2cd-9290d6a756c8/kube-rbac-proxy/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.629299 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55b6b7c7b8-sqftk_bc224de1-26e7-447e-a2cd-9290d6a756c8/manager/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.688353 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5f67fbc655-dzsh5_1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb/kube-rbac-proxy/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.706823 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5f67fbc655-dzsh5_1b4f79cc-b86c-4742-97cd-d5c2a7fc95fb/manager/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.844894 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f9fb45f8f-5fzdw_d971bfad-5c66-4145-beb3-fadf231cbacf/kube-rbac-proxy/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.924326 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f9fb45f8f-5fzdw_d971bfad-5c66-4145-beb3-fadf231cbacf/manager/0.log" Oct 13 09:21:27 crc kubenswrapper[4833]: I1013 09:21:27.978301 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-79d585cb66-542w7_390d3ba7-7d67-4b01-9729-22040d2c8ecd/kube-rbac-proxy/0.log" Oct 13 09:21:28 crc kubenswrapper[4833]: I1013 09:21:28.110662 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5df598886f-5nmfh_336c7267-9a4b-4924-bad8-9ccbef37dc21/kube-rbac-proxy/0.log" Oct 13 09:21:28 crc kubenswrapper[4833]: I1013 09:21:28.120055 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-79d585cb66-542w7_390d3ba7-7d67-4b01-9729-22040d2c8ecd/manager/0.log" Oct 13 09:21:28 crc kubenswrapper[4833]: I1013 09:21:28.322725 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5df598886f-5nmfh_336c7267-9a4b-4924-bad8-9ccbef37dc21/manager/0.log" Oct 13 09:21:28 crc kubenswrapper[4833]: I1013 09:21:28.377362 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69fdcfc5f5-h2bzs_7e224255-b0b7-444a-aa67-8980b10e4131/kube-rbac-proxy/0.log" Oct 13 09:21:28 crc kubenswrapper[4833]: I1013 09:21:28.398092 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69fdcfc5f5-h2bzs_7e224255-b0b7-444a-aa67-8980b10e4131/manager/0.log" Oct 13 09:21:28 crc kubenswrapper[4833]: I1013 
09:21:28.508485 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5956dffb7bm68wk_e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6/kube-rbac-proxy/0.log" Oct 13 09:21:28 crc kubenswrapper[4833]: I1013 09:21:28.586726 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5956dffb7bm68wk_e21ec2f6-af6b-4fa8-98c7-937dbf8f44b6/manager/0.log" Oct 13 09:21:28 crc kubenswrapper[4833]: I1013 09:21:28.710010 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b95c8954b-pn28d_d941a373-01b1-4305-a56a-8829605f9efa/kube-rbac-proxy/0.log" Oct 13 09:21:29 crc kubenswrapper[4833]: I1013 09:21:29.437218 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-688d597459-56fw9_3e030361-241a-4361-b8aa-13454891c551/kube-rbac-proxy/0.log" Oct 13 09:21:29 crc kubenswrapper[4833]: I1013 09:21:29.608501 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-688d597459-56fw9_3e030361-241a-4361-b8aa-13454891c551/operator/0.log" Oct 13 09:21:29 crc kubenswrapper[4833]: I1013 09:21:29.730098 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-79df5fb58c-hmwbl_a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972/kube-rbac-proxy/0.log" Oct 13 09:21:29 crc kubenswrapper[4833]: I1013 09:21:29.866521 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7gvlk_a9ca3f95-4e28-4e9d-ba53-2c6aa8dd7298/registry-server/0.log" Oct 13 09:21:29 crc kubenswrapper[4833]: I1013 09:21:29.970274 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-68b6c87b68-ffb5v_f7288e97-60b3-4dbf-8717-883c28e960b4/kube-rbac-proxy/0.log" Oct 13 09:21:29 crc kubenswrapper[4833]: I1013 09:21:29.979672 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-79df5fb58c-hmwbl_a8ddb0b0-ab30-4dfe-b4c1-3ba0a53d9972/manager/0.log" Oct 13 09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.140975 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-68b6c87b68-ffb5v_f7288e97-60b3-4dbf-8717-883c28e960b4/manager/0.log" Oct 13 09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.238455 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-4lk4k_76ebee26-0a3b-49a8-92f1-4eb0362ed0c5/operator/0.log" Oct 13 09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.418741 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-db6d7f97b-gnrwn_f47ab387-784f-4cfa-998c-1c37b7b15bb8/manager/0.log" Oct 13 09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.463002 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-db6d7f97b-gnrwn_f47ab387-784f-4cfa-998c-1c37b7b15bb8/kube-rbac-proxy/0.log" Oct 13 09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.520350 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-67cfc6749b-jh5ml_96aa2f66-4ecd-476b-9bf2-a9da443767df/kube-rbac-proxy/0.log" Oct 13 
09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.665745 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5458f77c4-npmrc_f710c8db-0ead-4d38-9dd5-74b1068c85cc/kube-rbac-proxy/0.log" Oct 13 09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.740698 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5458f77c4-npmrc_f710c8db-0ead-4d38-9dd5-74b1068c85cc/manager/0.log" Oct 13 09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.952219 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7f554bff7b-2kjcp_5822a35e-6851-47e9-be13-3a5418c44787/kube-rbac-proxy/0.log" Oct 13 09:21:30 crc kubenswrapper[4833]: I1013 09:21:30.978487 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7f554bff7b-2kjcp_5822a35e-6851-47e9-be13-3a5418c44787/manager/0.log" Oct 13 09:21:31 crc kubenswrapper[4833]: I1013 09:21:31.024704 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-67cfc6749b-jh5ml_96aa2f66-4ecd-476b-9bf2-a9da443767df/manager/0.log" Oct 13 09:21:31 crc kubenswrapper[4833]: I1013 09:21:31.897844 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b95c8954b-pn28d_d941a373-01b1-4305-a56a-8829605f9efa/manager/0.log" Oct 13 09:21:49 crc kubenswrapper[4833]: I1013 09:21:49.498101 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-lr2tr_822cc654-3f56-4ca1-b73c-863bc7129d43/control-plane-machine-set-operator/0.log" Oct 13 09:21:49 crc kubenswrapper[4833]: I1013 09:21:49.737958 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zk7qg_404a7ccb-1a6f-4185-aba4-e74c8fcd6092/kube-rbac-proxy/0.log" Oct 13 09:21:49 crc kubenswrapper[4833]: I1013 09:21:49.772826 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zk7qg_404a7ccb-1a6f-4185-aba4-e74c8fcd6092/machine-api-operator/0.log" Oct 13 09:22:03 crc kubenswrapper[4833]: I1013 09:22:03.060178 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-7d4cc89fcb-69dqf_31c1c666-1827-46ae-9e9f-5639a894a089/cert-manager-controller/0.log" Oct 13 09:22:03 crc kubenswrapper[4833]: I1013 09:22:03.214829 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7d9f95dbf-bvm97_c2674d19-a371-463e-8baa-c7c278bea011/cert-manager-cainjector/0.log" Oct 13 09:22:03 crc kubenswrapper[4833]: I1013 09:22:03.303619 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-d969966f-wlkmt_45fe6616-16f5-4179-b297-6260a2573ae7/cert-manager-webhook/0.log" Oct 13 09:22:17 crc kubenswrapper[4833]: I1013 09:22:17.531817 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-9657w_8d564272-a394-4689-b2c6-0685d447a2a4/nmstate-console-plugin/0.log" Oct 13 09:22:17 crc kubenswrapper[4833]: I1013 09:22:17.541233 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-d45xz_a0c9d202-f469-4633-85a2-16cea67b5d26/nmstate-handler/0.log" Oct 13 09:22:17 crc kubenswrapper[4833]: I1013 
09:22:17.701777 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-drn9l_be580188-b967-4a91-b2ff-5b82f300d50f/kube-rbac-proxy/0.log" Oct 13 09:22:17 crc kubenswrapper[4833]: I1013 09:22:17.718650 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-drn9l_be580188-b967-4a91-b2ff-5b82f300d50f/nmstate-metrics/0.log" Oct 13 09:22:17 crc kubenswrapper[4833]: I1013 09:22:17.911196 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-4qbpk_9694fd5a-c5d2-4d3e-9e5b-5ca415933b33/nmstate-operator/0.log" Oct 13 09:22:17 crc kubenswrapper[4833]: I1013 09:22:17.923739 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-zfgzs_3fd37d62-cc28-41c0-a6ee-f086c41cbcec/nmstate-webhook/0.log" Oct 13 09:22:32 crc kubenswrapper[4833]: I1013 09:22:32.130256 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-wv7ff_c6237d22-0249-4927-9ec7-d7b86bb6e80e/kube-rbac-proxy/0.log" Oct 13 09:22:32 crc kubenswrapper[4833]: I1013 09:22:32.400424 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-frr-files/0.log" Oct 13 09:22:32 crc kubenswrapper[4833]: I1013 09:22:32.631159 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-metrics/0.log" Oct 13 09:22:32 crc kubenswrapper[4833]: I1013 09:22:32.638506 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-wv7ff_c6237d22-0249-4927-9ec7-d7b86bb6e80e/controller/0.log" Oct 13 09:22:32 crc kubenswrapper[4833]: I1013 09:22:32.694246 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-frr-files/0.log" Oct 13 09:22:32 crc kubenswrapper[4833]: I1013 09:22:32.696877 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-reloader/0.log" Oct 13 09:22:32 crc kubenswrapper[4833]: I1013 09:22:32.820046 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-reloader/0.log" Oct 13 09:22:32 crc kubenswrapper[4833]: I1013 09:22:32.979697 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-frr-files/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.027333 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-reloader/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.047803 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-metrics/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.063805 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-metrics/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.237217 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-frr-files/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.243776 4833 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-metrics/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.255461 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/controller/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.268668 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/cp-reloader/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.428382 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/kube-rbac-proxy/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.447843 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/frr-metrics/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.484192 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/kube-rbac-proxy-frr/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.676695 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/reloader/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.716108 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-n574p_5e16790f-a99f-4c1c-ac8e-b350e0e9efc9/frr-k8s-webhook-server/0.log" Oct 13 09:22:33 crc kubenswrapper[4833]: I1013 09:22:33.922795 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57cb68956b-fwz7n_5fa05b8f-37f5-468d-a716-752f3402d091/manager/0.log" Oct 13 09:22:34 crc kubenswrapper[4833]: I1013 09:22:34.146220 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6645b6586b-dks2w_d3563d2c-a34e-4301-a437-b963e22b0c33/webhook-server/0.log" Oct 13 09:22:34 crc kubenswrapper[4833]: I1013 09:22:34.248630 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w6zhs_1a12e7a4-c597-4876-b004-8a12717d688e/kube-rbac-proxy/0.log" Oct 13 09:22:35 crc kubenswrapper[4833]: I1013 09:22:35.337029 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w6zhs_1a12e7a4-c597-4876-b004-8a12717d688e/speaker/0.log" Oct 13 09:22:36 crc kubenswrapper[4833]: I1013 09:22:36.705967 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nf9x_283c8c8a-15a2-46bb-9cbc-a6f9276dfbd5/frr/0.log" Oct 13 09:22:50 crc kubenswrapper[4833]: I1013 09:22:50.883970 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs_55762019-8ec1-492a-9691-d02c118d3176/util/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.139406 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs_55762019-8ec1-492a-9691-d02c118d3176/pull/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.141868 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs_55762019-8ec1-492a-9691-d02c118d3176/pull/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.146186 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs_55762019-8ec1-492a-9691-d02c118d3176/util/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.300753 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs_55762019-8ec1-492a-9691-d02c118d3176/util/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.356244 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs_55762019-8ec1-492a-9691-d02c118d3176/pull/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.378351 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69zp7vs_55762019-8ec1-492a-9691-d02c118d3176/extract/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.476828 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4_17d37695-97f2-49f8-907b-a7b183bd0bc0/util/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.656722 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4_17d37695-97f2-49f8-907b-a7b183bd0bc0/util/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.665706 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4_17d37695-97f2-49f8-907b-a7b183bd0bc0/pull/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.733199 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4_17d37695-97f2-49f8-907b-a7b183bd0bc0/pull/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.935876 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4_17d37695-97f2-49f8-907b-a7b183bd0bc0/extract/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.945357 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4_17d37695-97f2-49f8-907b-a7b183bd0bc0/util/0.log" Oct 13 09:22:51 crc kubenswrapper[4833]: I1013 09:22:51.953776 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2gzwz4_17d37695-97f2-49f8-907b-a7b183bd0bc0/pull/0.log" Oct 13 09:22:52 crc kubenswrapper[4833]: I1013 09:22:52.186873 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k_ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a/util/0.log" Oct 13 09:22:52 crc kubenswrapper[4833]: I1013 09:22:52.334601 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k_ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a/util/0.log" Oct 13 09:22:52 crc kubenswrapper[4833]: I1013 09:22:52.425702 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k_ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a/pull/0.log" Oct 13 09:22:52 crc kubenswrapper[4833]: I1013 09:22:52.425760 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k_ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a/pull/0.log" Oct 13 09:22:52 crc kubenswrapper[4833]: I1013 09:22:52.549223 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k_ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a/pull/0.log" Oct 13 09:22:52 crc kubenswrapper[4833]: I1013 09:22:52.572188 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k_ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a/util/0.log" Oct 13 09:22:52 crc kubenswrapper[4833]: I1013 09:22:52.908529 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dpqw9k_ed6a7dfb-e951-4ba3-a992-75b7bfd2ad5a/extract/0.log" Oct 13 09:22:52 crc kubenswrapper[4833]: I1013 09:22:52.948912 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2bk65_adc8be39-7458-4c84-b227-856761d77e4e/extract-utilities/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.189317 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2bk65_adc8be39-7458-4c84-b227-856761d77e4e/extract-utilities/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.192028 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2bk65_adc8be39-7458-4c84-b227-856761d77e4e/extract-content/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.197512 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2bk65_adc8be39-7458-4c84-b227-856761d77e4e/extract-content/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.358427 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2bk65_adc8be39-7458-4c84-b227-856761d77e4e/extract-utilities/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.382819 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2bk65_adc8be39-7458-4c84-b227-856761d77e4e/extract-content/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.507513 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bwrt4_26fa694e-4c55-40eb-a912-78d3e13520f0/extract-utilities/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.752270 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bwrt4_26fa694e-4c55-40eb-a912-78d3e13520f0/extract-utilities/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.762118 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-bwrt4_26fa694e-4c55-40eb-a912-78d3e13520f0/extract-content/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.802212 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bwrt4_26fa694e-4c55-40eb-a912-78d3e13520f0/extract-content/0.log" Oct 13 09:22:53 crc kubenswrapper[4833]: I1013 09:22:53.972336 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bwrt4_26fa694e-4c55-40eb-a912-78d3e13520f0/extract-content/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.016610 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bwrt4_26fa694e-4c55-40eb-a912-78d3e13520f0/extract-utilities/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.182143 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5_962d14bb-624a-4fa8-93ae-e81f514487ca/util/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.510863 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5_962d14bb-624a-4fa8-93ae-e81f514487ca/util/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.522334 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5_962d14bb-624a-4fa8-93ae-e81f514487ca/pull/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.562751 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5_962d14bb-624a-4fa8-93ae-e81f514487ca/pull/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.735167 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5_962d14bb-624a-4fa8-93ae-e81f514487ca/util/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.761969 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5_962d14bb-624a-4fa8-93ae-e81f514487ca/pull/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.783373 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cb26g5_962d14bb-624a-4fa8-93ae-e81f514487ca/extract/0.log" Oct 13 09:22:54 crc kubenswrapper[4833]: I1013 09:22:54.987575 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hf4k4_9abfcabe-0f85-4d47-aace-d218b9245549/marketplace-operator/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.161801 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9v92m_6beb74db-c3f1-480b-b294-9d1ba1867055/extract-utilities/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.395659 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2bk65_adc8be39-7458-4c84-b227-856761d77e4e/registry-server/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.470712 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-9v92m_6beb74db-c3f1-480b-b294-9d1ba1867055/extract-utilities/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.471412 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bwrt4_26fa694e-4c55-40eb-a912-78d3e13520f0/registry-server/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.498874 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9v92m_6beb74db-c3f1-480b-b294-9d1ba1867055/extract-content/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.555517 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9v92m_6beb74db-c3f1-480b-b294-9d1ba1867055/extract-content/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.689437 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9v92m_6beb74db-c3f1-480b-b294-9d1ba1867055/extract-content/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.690107 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9v92m_6beb74db-c3f1-480b-b294-9d1ba1867055/extract-utilities/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.847393 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fl56v_e48cfb66-5672-427f-ac38-4c191d8735e9/extract-utilities/0.log" Oct 13 09:22:55 crc kubenswrapper[4833]: I1013 09:22:55.983607 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fl56v_e48cfb66-5672-427f-ac38-4c191d8735e9/extract-content/0.log" Oct 13 09:22:56 crc kubenswrapper[4833]: I1013 09:22:56.017819 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9v92m_6beb74db-c3f1-480b-b294-9d1ba1867055/registry-server/0.log" Oct 13 09:22:56 crc kubenswrapper[4833]: I1013 09:22:56.018584 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fl56v_e48cfb66-5672-427f-ac38-4c191d8735e9/extract-utilities/0.log" Oct 13 09:22:56 crc kubenswrapper[4833]: I1013 09:22:56.024201 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fl56v_e48cfb66-5672-427f-ac38-4c191d8735e9/extract-content/0.log" Oct 13 09:22:56 crc kubenswrapper[4833]: I1013 09:22:56.200650 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fl56v_e48cfb66-5672-427f-ac38-4c191d8735e9/extract-content/0.log" Oct 13 09:22:56 crc kubenswrapper[4833]: I1013 09:22:56.201641 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fl56v_e48cfb66-5672-427f-ac38-4c191d8735e9/extract-utilities/0.log" Oct 13 09:22:57 crc kubenswrapper[4833]: I1013 09:22:57.107959 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fl56v_e48cfb66-5672-427f-ac38-4c191d8735e9/registry-server/0.log" Oct 13 09:23:00 crc kubenswrapper[4833]: I1013 09:23:00.543299 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:23:00 crc 
kubenswrapper[4833]: I1013 09:23:00.543707 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:23:08 crc kubenswrapper[4833]: I1013 09:23:08.914940 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-7c8cf85677-7wnhh_22d23ba0-fd38-403e-8ec5-6c42942bd13a/prometheus-operator/0.log" Oct 13 09:23:09 crc kubenswrapper[4833]: I1013 09:23:09.075037 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7885f6cfd9-9pr5t_937ac9f6-a3cb-405d-a685-e0fcee7d3cea/prometheus-operator-admission-webhook/0.log" Oct 13 09:23:09 crc kubenswrapper[4833]: I1013 09:23:09.139734 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7885f6cfd9-wsw46_68878938-abe8-45e8-bd4e-dc920d54cdfa/prometheus-operator-admission-webhook/0.log" Oct 13 09:23:09 crc kubenswrapper[4833]: I1013 09:23:09.309212 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-cc5f78dfc-9vqdj_7c9058d7-a182-413c-b6a7-f25560ca3989/operator/0.log" Oct 13 09:23:09 crc kubenswrapper[4833]: I1013 09:23:09.346846 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-54bc95c9fb-tjpvt_0a572d30-12a1-498e-bc38-d27cb95328ce/perses-operator/0.log" Oct 13 09:23:30 crc kubenswrapper[4833]: I1013 09:23:30.543049 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:23:30 crc kubenswrapper[4833]: I1013 09:23:30.543686 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:24:00 crc kubenswrapper[4833]: I1013 09:24:00.542892 4833 patch_prober.go:28] interesting pod/machine-config-daemon-wd7ss container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 09:24:00 crc kubenswrapper[4833]: I1013 09:24:00.543457 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 09:24:00 crc kubenswrapper[4833]: I1013 09:24:00.543524 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" Oct 13 09:24:00 crc kubenswrapper[4833]: I1013 09:24:00.544506 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151"} pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 09:24:00 crc kubenswrapper[4833]: I1013 09:24:00.544684 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerName="machine-config-daemon" containerID="cri-o://9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" gracePeriod=600 Oct 13 09:24:00 crc kubenswrapper[4833]: E1013 09:24:00.666725 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:24:01 crc kubenswrapper[4833]: I1013 09:24:01.129243 4833 generic.go:334] "Generic (PLEG): container finished" podID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" exitCode=0 Oct 13 09:24:01 crc kubenswrapper[4833]: I1013 09:24:01.129289 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" event={"ID":"fa5b6ea2-f89e-4768-8663-bd965bde64fa","Type":"ContainerDied","Data":"9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151"} Oct 13 09:24:01 crc kubenswrapper[4833]: I1013 09:24:01.129327 4833 scope.go:117] "RemoveContainer" containerID="c711e429a871ab17078d49d8686220613b7c9459f066d2cfadebd3ed9d5979ed" Oct 13 09:24:01 crc kubenswrapper[4833]: I1013 09:24:01.133285 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:24:01 crc kubenswrapper[4833]: E1013 09:24:01.133909 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:24:11 crc kubenswrapper[4833]: I1013 09:24:11.627532 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:24:11 crc kubenswrapper[4833]: E1013 09:24:11.628407 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:24:22 crc kubenswrapper[4833]: I1013 09:24:22.628190 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:24:22 crc kubenswrapper[4833]: E1013 09:24:22.630652 4833 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:24:35 crc kubenswrapper[4833]: I1013 09:24:35.627700 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:24:35 crc kubenswrapper[4833]: E1013 09:24:35.628886 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:24:49 crc kubenswrapper[4833]: I1013 09:24:49.628793 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:24:49 crc kubenswrapper[4833]: E1013 09:24:49.629785 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:25:00 crc kubenswrapper[4833]: I1013 09:25:00.643742 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:25:00 crc kubenswrapper[4833]: E1013 09:25:00.644476 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:25:11 crc kubenswrapper[4833]: I1013 09:25:11.629999 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:25:11 crc kubenswrapper[4833]: E1013 09:25:11.631275 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.052562 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d24nv"] Oct 13 09:25:13 crc kubenswrapper[4833]: E1013 09:25:13.053274 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="extract-content" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.053286 4833 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="extract-content" Oct 13 09:25:13 crc kubenswrapper[4833]: E1013 09:25:13.053345 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="extract-utilities" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.053358 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="extract-utilities" Oct 13 09:25:13 crc kubenswrapper[4833]: E1013 09:25:13.053373 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="registry-server" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.053381 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="registry-server" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.053621 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa468ee-0ec3-47e0-aeff-0a7841650350" containerName="registry-server" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.055294 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.070213 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d24nv"] Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.149259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-catalog-content\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.149414 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-utilities\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.149517 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj9gp\" (UniqueName: \"kubernetes.io/projected/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-kube-api-access-fj9gp\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.252110 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-utilities\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.252507 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-utilities\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.252701 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj9gp\" (UniqueName: \"kubernetes.io/projected/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-kube-api-access-fj9gp\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.253081 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-catalog-content\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.253380 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-catalog-content\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.280844 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj9gp\" (UniqueName: \"kubernetes.io/projected/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-kube-api-access-fj9gp\") pod \"certified-operators-d24nv\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.383122 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:13 crc kubenswrapper[4833]: I1013 09:25:13.997156 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d24nv"] Oct 13 09:25:15 crc kubenswrapper[4833]: I1013 09:25:15.005879 4833 generic.go:334] "Generic (PLEG): container finished" podID="eb40b67b-b652-45bd-b3cf-efd102ddbfa0" containerID="a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef" exitCode=0 Oct 13 09:25:15 crc kubenswrapper[4833]: I1013 09:25:15.005925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d24nv" event={"ID":"eb40b67b-b652-45bd-b3cf-efd102ddbfa0","Type":"ContainerDied","Data":"a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef"} Oct 13 09:25:15 crc kubenswrapper[4833]: I1013 09:25:15.005955 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d24nv" event={"ID":"eb40b67b-b652-45bd-b3cf-efd102ddbfa0","Type":"ContainerStarted","Data":"a9fd279210a493c4745bc64bad403f8d5677296d1b4c81f3cacf757bebe24fa1"} Oct 13 09:25:15 crc kubenswrapper[4833]: I1013 09:25:15.009174 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 09:25:17 crc kubenswrapper[4833]: I1013 09:25:17.029559 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d24nv" event={"ID":"eb40b67b-b652-45bd-b3cf-efd102ddbfa0","Type":"ContainerStarted","Data":"a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059"} Oct 13 09:25:18 crc kubenswrapper[4833]: I1013 09:25:18.041214 4833 generic.go:334] "Generic (PLEG): container finished" podID="eb40b67b-b652-45bd-b3cf-efd102ddbfa0" containerID="a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059" exitCode=0 Oct 13 09:25:18 
crc kubenswrapper[4833]: I1013 09:25:18.041269 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d24nv" event={"ID":"eb40b67b-b652-45bd-b3cf-efd102ddbfa0","Type":"ContainerDied","Data":"a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059"} Oct 13 09:25:19 crc kubenswrapper[4833]: I1013 09:25:19.051707 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d24nv" event={"ID":"eb40b67b-b652-45bd-b3cf-efd102ddbfa0","Type":"ContainerStarted","Data":"c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890"} Oct 13 09:25:19 crc kubenswrapper[4833]: I1013 09:25:19.055183 4833 generic.go:334] "Generic (PLEG): container finished" podID="7d7251b3-0441-44dd-8966-4c912b8cf2b1" containerID="005dbe276c69cd71bab29411b0c3b798f16f4135c0bb96ae158d83b702f63b62" exitCode=0 Oct 13 09:25:19 crc kubenswrapper[4833]: I1013 09:25:19.055226 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xxsh/must-gather-rd2pj" event={"ID":"7d7251b3-0441-44dd-8966-4c912b8cf2b1","Type":"ContainerDied","Data":"005dbe276c69cd71bab29411b0c3b798f16f4135c0bb96ae158d83b702f63b62"} Oct 13 09:25:19 crc kubenswrapper[4833]: I1013 09:25:19.055925 4833 scope.go:117] "RemoveContainer" containerID="005dbe276c69cd71bab29411b0c3b798f16f4135c0bb96ae158d83b702f63b62" Oct 13 09:25:19 crc kubenswrapper[4833]: I1013 09:25:19.078413 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d24nv" podStartSLOduration=2.453724218 podStartE2EDuration="6.078392767s" podCreationTimestamp="2025-10-13 09:25:13 +0000 UTC" firstStartedPulling="2025-10-13 09:25:15.008856908 +0000 UTC m=+10605.109279824" lastFinishedPulling="2025-10-13 09:25:18.633525447 +0000 UTC m=+10608.733948373" observedRunningTime="2025-10-13 09:25:19.067833667 +0000 UTC m=+10609.168256603" watchObservedRunningTime="2025-10-13 09:25:19.078392767 +0000 UTC m=+10609.178815683" Oct 13 09:25:19 crc kubenswrapper[4833]: I1013 09:25:19.710866 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5xxsh_must-gather-rd2pj_7d7251b3-0441-44dd-8966-4c912b8cf2b1/gather/0.log" Oct 13 09:25:23 crc kubenswrapper[4833]: I1013 09:25:23.383820 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:23 crc kubenswrapper[4833]: I1013 09:25:23.384336 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:23 crc kubenswrapper[4833]: I1013 09:25:23.434652 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:24 crc kubenswrapper[4833]: I1013 09:25:24.168301 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:24 crc kubenswrapper[4833]: I1013 09:25:24.225363 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d24nv"] Oct 13 09:25:25 crc kubenswrapper[4833]: I1013 09:25:25.627523 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:25:25 crc kubenswrapper[4833]: E1013 09:25:25.628091 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.129241 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d24nv" podUID="eb40b67b-b652-45bd-b3cf-efd102ddbfa0" containerName="registry-server" containerID="cri-o://c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890" gracePeriod=2 Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.684637 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.802267 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-catalog-content\") pod \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.802365 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-utilities\") pod \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.802420 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj9gp\" (UniqueName: \"kubernetes.io/projected/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-kube-api-access-fj9gp\") pod \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\" (UID: \"eb40b67b-b652-45bd-b3cf-efd102ddbfa0\") " Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.803204 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-utilities" (OuterVolumeSpecName: "utilities") pod "eb40b67b-b652-45bd-b3cf-efd102ddbfa0" (UID: "eb40b67b-b652-45bd-b3cf-efd102ddbfa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.808015 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-kube-api-access-fj9gp" (OuterVolumeSpecName: "kube-api-access-fj9gp") pod "eb40b67b-b652-45bd-b3cf-efd102ddbfa0" (UID: "eb40b67b-b652-45bd-b3cf-efd102ddbfa0"). InnerVolumeSpecName "kube-api-access-fj9gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.844774 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb40b67b-b652-45bd-b3cf-efd102ddbfa0" (UID: "eb40b67b-b652-45bd-b3cf-efd102ddbfa0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.904649 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.904688 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 09:25:26 crc kubenswrapper[4833]: I1013 09:25:26.904697 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj9gp\" (UniqueName: \"kubernetes.io/projected/eb40b67b-b652-45bd-b3cf-efd102ddbfa0-kube-api-access-fj9gp\") on node \"crc\" DevicePath \"\"" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.141391 4833 generic.go:334] "Generic (PLEG): container finished" podID="eb40b67b-b652-45bd-b3cf-efd102ddbfa0" containerID="c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890" exitCode=0 Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.141452 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d24nv" event={"ID":"eb40b67b-b652-45bd-b3cf-efd102ddbfa0","Type":"ContainerDied","Data":"c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890"} Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.141491 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d24nv" event={"ID":"eb40b67b-b652-45bd-b3cf-efd102ddbfa0","Type":"ContainerDied","Data":"a9fd279210a493c4745bc64bad403f8d5677296d1b4c81f3cacf757bebe24fa1"} Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.141453 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d24nv" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.141531 4833 scope.go:117] "RemoveContainer" containerID="c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.165423 4833 scope.go:117] "RemoveContainer" containerID="a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.183497 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d24nv"] Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.191920 4833 scope.go:117] "RemoveContainer" containerID="a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.193622 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d24nv"] Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.240923 4833 scope.go:117] "RemoveContainer" containerID="c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890" Oct 13 09:25:27 crc kubenswrapper[4833]: E1013 09:25:27.243183 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890\": container with ID starting with c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890 not found: ID does not exist" containerID="c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.243240 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890"} err="failed to get container status \"c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890\": rpc error: code = NotFound desc = could not find container \"c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890\": container with ID starting with c0fd131c3de9232fdb05433830ce3564eb1d2e342adc539fc0163a756a4be890 not found: ID does not exist" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.243267 4833 scope.go:117] "RemoveContainer" containerID="a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059" Oct 13 09:25:27 crc kubenswrapper[4833]: E1013 09:25:27.243659 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059\": container with ID starting with a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059 not found: ID does not exist" containerID="a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.243693 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059"} err="failed to get container status \"a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059\": rpc error: code = NotFound desc = could not find container \"a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059\": container with ID starting with a0e4f508e1794bc965316757068511721b852ae2fa5946264f2ebd948b4f7059 not found: ID does not exist" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.243711 4833 scope.go:117] "RemoveContainer" 
containerID="a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef" Oct 13 09:25:27 crc kubenswrapper[4833]: E1013 09:25:27.244097 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef\": container with ID starting with a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef not found: ID does not exist" containerID="a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.244143 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef"} err="failed to get container status \"a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef\": rpc error: code = NotFound desc = could not find container \"a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef\": container with ID starting with a74d24f3955ad50ef0bb99531c955d4c92c037d63697232ce647fd43264845ef not found: ID does not exist" Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.780835 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5xxsh/must-gather-rd2pj"] Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.781105 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5xxsh/must-gather-rd2pj" podUID="7d7251b3-0441-44dd-8966-4c912b8cf2b1" containerName="copy" containerID="cri-o://8d6dbd1a2862bc0741fea06aca9ece6084b3fdf6e790e68c275cde669ce1c07d" gracePeriod=2 Oct 13 09:25:27 crc kubenswrapper[4833]: I1013 09:25:27.788944 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5xxsh/must-gather-rd2pj"] Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.167619 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5xxsh_must-gather-rd2pj_7d7251b3-0441-44dd-8966-4c912b8cf2b1/copy/0.log" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.168338 4833 generic.go:334] "Generic (PLEG): container finished" podID="7d7251b3-0441-44dd-8966-4c912b8cf2b1" containerID="8d6dbd1a2862bc0741fea06aca9ece6084b3fdf6e790e68c275cde669ce1c07d" exitCode=143 Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.168408 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84ea5f2bbc7e6e594b7919c676ff5a565ec0d5ec660fdde169a3d04c76568cb" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.225314 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5xxsh_must-gather-rd2pj_7d7251b3-0441-44dd-8966-4c912b8cf2b1/copy/0.log" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.225836 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5xxsh/must-gather-rd2pj" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.339290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7d7251b3-0441-44dd-8966-4c912b8cf2b1-must-gather-output\") pod \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\" (UID: \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\") " Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.339370 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h88mt\" (UniqueName: \"kubernetes.io/projected/7d7251b3-0441-44dd-8966-4c912b8cf2b1-kube-api-access-h88mt\") pod \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\" (UID: \"7d7251b3-0441-44dd-8966-4c912b8cf2b1\") " Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.345781 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7251b3-0441-44dd-8966-4c912b8cf2b1-kube-api-access-h88mt" (OuterVolumeSpecName: "kube-api-access-h88mt") pod "7d7251b3-0441-44dd-8966-4c912b8cf2b1" (UID: "7d7251b3-0441-44dd-8966-4c912b8cf2b1"). InnerVolumeSpecName "kube-api-access-h88mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.445621 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h88mt\" (UniqueName: \"kubernetes.io/projected/7d7251b3-0441-44dd-8966-4c912b8cf2b1-kube-api-access-h88mt\") on node \"crc\" DevicePath \"\"" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.556521 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7251b3-0441-44dd-8966-4c912b8cf2b1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7d7251b3-0441-44dd-8966-4c912b8cf2b1" (UID: "7d7251b3-0441-44dd-8966-4c912b8cf2b1"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.641754 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7251b3-0441-44dd-8966-4c912b8cf2b1" path="/var/lib/kubelet/pods/7d7251b3-0441-44dd-8966-4c912b8cf2b1/volumes" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.643286 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb40b67b-b652-45bd-b3cf-efd102ddbfa0" path="/var/lib/kubelet/pods/eb40b67b-b652-45bd-b3cf-efd102ddbfa0/volumes" Oct 13 09:25:28 crc kubenswrapper[4833]: I1013 09:25:28.650117 4833 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7d7251b3-0441-44dd-8966-4c912b8cf2b1-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 13 09:25:29 crc kubenswrapper[4833]: I1013 09:25:29.192607 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5xxsh/must-gather-rd2pj" Oct 13 09:25:39 crc kubenswrapper[4833]: I1013 09:25:39.627671 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:25:39 crc kubenswrapper[4833]: E1013 09:25:39.628387 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:25:53 crc kubenswrapper[4833]: I1013 09:25:53.627531 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:25:53 crc kubenswrapper[4833]: E1013 09:25:53.628499 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:25:59 crc kubenswrapper[4833]: I1013 09:25:59.002486 4833 scope.go:117] "RemoveContainer" containerID="005dbe276c69cd71bab29411b0c3b798f16f4135c0bb96ae158d83b702f63b62" Oct 13 09:25:59 crc kubenswrapper[4833]: I1013 09:25:59.118570 4833 scope.go:117] "RemoveContainer" containerID="8d6dbd1a2862bc0741fea06aca9ece6084b3fdf6e790e68c275cde669ce1c07d" Oct 13 09:26:05 crc kubenswrapper[4833]: I1013 09:26:05.627142 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:26:05 crc kubenswrapper[4833]: E1013 09:26:05.628065 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:26:20 crc kubenswrapper[4833]: I1013 09:26:20.637152 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:26:20 crc kubenswrapper[4833]: E1013 09:26:20.637950 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:26:33 crc kubenswrapper[4833]: I1013 09:26:33.626823 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:26:33 crc kubenswrapper[4833]: E1013 09:26:33.627750 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:26:48 crc kubenswrapper[4833]: I1013 09:26:48.628185 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:26:48 crc kubenswrapper[4833]: E1013 09:26:48.629151 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:27:01 crc kubenswrapper[4833]: I1013 09:27:01.627620 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:27:01 crc kubenswrapper[4833]: E1013 09:27:01.628783 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa" Oct 13 09:27:12 crc kubenswrapper[4833]: I1013 09:27:12.628072 4833 scope.go:117] "RemoveContainer" containerID="9ce1dc1789caa8b1bb9ce3bf55d2ad4fa619998cd139896048fd851c8281d151" Oct 13 09:27:12 crc kubenswrapper[4833]: E1013 09:27:12.629311 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wd7ss_openshift-machine-config-operator(fa5b6ea2-f89e-4768-8663-bd965bde64fa)\"" pod="openshift-machine-config-operator/machine-config-daemon-wd7ss" podUID="fa5b6ea2-f89e-4768-8663-bd965bde64fa"